2026-04-09 00:00:11.775163 | Job console starting
2026-04-09 00:00:11.808548 | Updating git repos
2026-04-09 00:00:11.907032 | Cloning repos into workspace
2026-04-09 00:00:12.327392 | Restoring repo states
2026-04-09 00:00:12.391537 | Merging changes
2026-04-09 00:00:12.391561 | Checking out repos
2026-04-09 00:00:13.089859 | Preparing playbooks
2026-04-09 00:00:15.283170 | Running Ansible setup
2026-04-09 00:00:22.694642 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-09 00:00:24.363368 |
2026-04-09 00:00:24.363483 | PLAY [Base pre]
2026-04-09 00:00:24.394290 |
2026-04-09 00:00:24.394400 | TASK [Setup log path fact]
2026-04-09 00:00:24.434353 | orchestrator | ok
2026-04-09 00:00:24.469438 |
2026-04-09 00:00:24.469556 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-09 00:00:24.528286 | orchestrator | ok
2026-04-09 00:00:24.549945 |
2026-04-09 00:00:24.550053 | TASK [emit-job-header : Print job information]
2026-04-09 00:00:24.634671 | # Job Information
2026-04-09 00:00:24.634892 | Ansible Version: 2.16.14
2026-04-09 00:00:24.634927 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-09 00:00:24.634960 | Pipeline: periodic-midnight
2026-04-09 00:00:24.634982 | Executor: 521e9411259a
2026-04-09 00:00:24.634999 | Triggered by: https://github.com/osism/testbed
2026-04-09 00:00:24.635017 | Event ID: 229a3ccad3314f149ff7c6cbe4e5e7b7
2026-04-09 00:00:24.641488 |
2026-04-09 00:00:24.641584 | LOOP [emit-job-header : Print node information]
2026-04-09 00:00:24.901686 | orchestrator | ok:
2026-04-09 00:00:24.901901 | orchestrator | # Node Information
2026-04-09 00:00:24.901932 | orchestrator | Inventory Hostname: orchestrator
2026-04-09 00:00:24.901952 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-09 00:00:24.901970 | orchestrator | Username: zuul-testbed05
2026-04-09 00:00:24.901986 | orchestrator | Distro: Debian 12.13
2026-04-09 00:00:24.902005 | orchestrator | Provider: static-testbed
2026-04-09 00:00:24.902023 | orchestrator | Region:
2026-04-09 00:00:24.902039 | orchestrator | Label: testbed-orchestrator
2026-04-09 00:00:24.902055 | orchestrator | Product Name: OpenStack Nova
2026-04-09 00:00:24.902071 | orchestrator | Interface IP: 81.163.193.140
2026-04-09 00:00:24.918873 |
2026-04-09 00:00:24.918966 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-09 00:00:26.250274 | orchestrator -> localhost | changed
2026-04-09 00:00:26.265256 |
2026-04-09 00:00:26.265359 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-09 00:00:29.298888 | orchestrator -> localhost | changed
2026-04-09 00:00:29.310102 |
2026-04-09 00:00:29.310197 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-09 00:00:29.911675 | orchestrator -> localhost | ok
2026-04-09 00:00:29.917310 |
2026-04-09 00:00:29.918024 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-09 00:00:29.946666 | orchestrator | ok
2026-04-09 00:00:30.045326 | orchestrator | included: /var/lib/zuul/builds/534efbc076ff4f1292525425ed63042a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-09 00:00:30.110532 |
2026-04-09 00:00:30.110643 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-09 00:00:35.949805 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-09 00:00:35.949973 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/534efbc076ff4f1292525425ed63042a/work/534efbc076ff4f1292525425ed63042a_id_rsa
2026-04-09 00:00:35.950004 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/534efbc076ff4f1292525425ed63042a/work/534efbc076ff4f1292525425ed63042a_id_rsa.pub
2026-04-09 00:00:35.950026 | orchestrator -> localhost | The key fingerprint is:
2026-04-09 00:00:35.950048 | orchestrator -> localhost | SHA256:8kJgAhVRhZ1Hj4VhohSyer3r46CIjKLV3ff8G7DHEXM zuul-build-sshkey
2026-04-09 00:00:35.950067 | orchestrator -> localhost | The key's randomart image is:
2026-04-09 00:00:35.950094 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-09 00:00:35.950113 | orchestrator -> localhost | |..=++=oo+o. |
2026-04-09 00:00:35.950131 | orchestrator -> localhost | | . +..ooo+ |
2026-04-09 00:00:35.950147 | orchestrator -> localhost | | o + .. . o E|
2026-04-09 00:00:35.950163 | orchestrator -> localhost | | . + . + |
2026-04-09 00:00:35.950180 | orchestrator -> localhost | |. . . o S . . |
2026-04-09 00:00:35.950204 | orchestrator -> localhost | | . . + + + . |
2026-04-09 00:00:35.950221 | orchestrator -> localhost | | o o o o . . + |
2026-04-09 00:00:35.950238 | orchestrator -> localhost | |*o ... . . o . . |
2026-04-09 00:00:35.950255 | orchestrator -> localhost | |O. o+. o.o. |
2026-04-09 00:00:35.950272 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-09 00:00:35.950319 | orchestrator -> localhost | ok: Runtime: 0:00:04.702464
2026-04-09 00:00:35.957783 |
2026-04-09 00:00:35.957871 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-09 00:00:35.990074 | orchestrator | ok
2026-04-09 00:00:36.009620 | orchestrator | included: /var/lib/zuul/builds/534efbc076ff4f1292525425ed63042a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-09 00:00:36.037809 |
2026-04-09 00:00:36.037923 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-09 00:00:36.050807 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:36.058490 |
2026-04-09 00:00:36.058590 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-09 00:00:37.128120 | orchestrator | changed
2026-04-09 00:00:37.133289 |
2026-04-09 00:00:37.133367 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-09 00:00:37.450095 | orchestrator | ok
2026-04-09 00:00:37.456113 |
2026-04-09 00:00:37.456197 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-09 00:00:37.944203 | orchestrator | ok
2026-04-09 00:00:37.954056 |
2026-04-09 00:00:37.954150 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-09 00:00:38.446042 | orchestrator | ok
2026-04-09 00:00:38.453697 |
2026-04-09 00:00:38.453808 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-09 00:00:38.509136 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:38.514767 |
2026-04-09 00:00:38.514971 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-09 00:00:39.563739 | orchestrator -> localhost | changed
2026-04-09 00:00:39.574534 |
2026-04-09 00:00:39.574628 | TASK [add-build-sshkey : Add back temp key]
2026-04-09 00:00:40.505566 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/534efbc076ff4f1292525425ed63042a/work/534efbc076ff4f1292525425ed63042a_id_rsa (zuul-build-sshkey)
2026-04-09 00:00:40.505761 | orchestrator -> localhost | ok: Runtime: 0:00:00.024540
2026-04-09 00:00:40.512542 |
2026-04-09 00:00:40.512703 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-09 00:00:41.120527 | orchestrator | ok
2026-04-09 00:00:41.125998 |
2026-04-09 00:00:41.126080 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-09 00:00:41.190021 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:41.325570 |
2026-04-09 00:00:41.325680 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-09 00:00:41.844412 | orchestrator | ok
2026-04-09 00:00:41.870930 |
2026-04-09 00:00:41.871042 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-09 00:00:41.924602 | orchestrator | ok
2026-04-09 00:00:41.930358 |
2026-04-09 00:00:41.930443 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-09 00:00:42.533620 | orchestrator -> localhost | ok
2026-04-09 00:00:42.541160 |
2026-04-09 00:00:42.541254 | TASK [validate-host : Collect information about the host]
2026-04-09 00:00:44.005554 | orchestrator | ok
2026-04-09 00:00:44.031799 |
2026-04-09 00:00:44.031903 | TASK [validate-host : Sanitize hostname]
2026-04-09 00:00:44.107314 | orchestrator | ok
2026-04-09 00:00:44.112368 |
2026-04-09 00:00:44.112487 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-09 00:00:45.512187 | orchestrator -> localhost | changed
2026-04-09 00:00:45.517132 |
2026-04-09 00:00:45.517212 | TASK [validate-host : Collect information about zuul worker]
2026-04-09 00:00:46.173319 | orchestrator | ok
2026-04-09 00:00:46.178023 |
2026-04-09 00:00:46.178104 | TASK [validate-host : Write out all zuul information for each host]
2026-04-09 00:00:47.327780 | orchestrator -> localhost | changed
2026-04-09 00:00:47.336039 |
2026-04-09 00:00:47.336170 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-09 00:00:47.645049 | orchestrator | ok
2026-04-09 00:00:47.650069 |
2026-04-09 00:00:47.650151 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-09 00:02:18.663732 | orchestrator | changed:
2026-04-09 00:02:18.668359 | orchestrator | .d..t...... src/
2026-04-09 00:02:18.668456 | orchestrator | .d..t...... src/github.com/
2026-04-09 00:02:18.668488 | orchestrator | .d..t...... src/github.com/osism/
2026-04-09 00:02:18.668514 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-09 00:02:18.668538 | orchestrator | RedHat.yml
2026-04-09 00:02:18.713169 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-09 00:02:18.713194 | orchestrator | RedHat.yml
2026-04-09 00:02:18.713276 | orchestrator | = 1.53.0"...
2026-04-09 00:02:29.447276 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-09 00:02:29.465926 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-09 00:02:29.609666 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-09 00:02:30.709082 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-09 00:02:30.774624 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-09 00:02:31.320068 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-09 00:02:31.385218 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-09 00:02:31.875739 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-09 00:02:31.875807 | orchestrator |
2026-04-09 00:02:31.875814 | orchestrator | Providers are signed by their developers.
2026-04-09 00:02:31.875819 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-09 00:02:31.875830 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-09 00:02:31.875864 | orchestrator |
2026-04-09 00:02:31.875870 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-09 00:02:31.875874 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-09 00:02:31.875892 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-09 00:02:31.875902 | orchestrator | you run "tofu init" in the future.
2026-04-09 00:02:31.876293 | orchestrator |
2026-04-09 00:02:31.876333 | orchestrator | OpenTofu has been successfully initialized!
2026-04-09 00:02:31.876356 | orchestrator |
2026-04-09 00:02:31.876361 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-09 00:02:31.876366 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-09 00:02:31.876370 | orchestrator | should now work.
2026-04-09 00:02:31.876374 | orchestrator |
2026-04-09 00:02:31.876378 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-09 00:02:31.876382 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-09 00:02:31.876393 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-09 00:02:32.075663 | orchestrator | Created and switched to workspace "ci"!
2026-04-09 00:02:32.075752 | orchestrator |
2026-04-09 00:02:32.075765 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-09 00:02:32.075774 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-09 00:02:32.075782 | orchestrator | for this configuration.
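The provider selections logged above (openstack matching ">= 1.53.0", local matching ">= 2.2.0", and the latest hashicorp/null) would typically come from a `required_providers` block in the testbed's OpenTofu configuration. The following is a minimal illustrative sketch consistent with those constraints, not the project's actual file:

```hcl
# Hypothetical versions.tf sketch; the real testbed configuration may differ.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.4.0 in the log above
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.8.0
    }
    null = {
      source = "hashicorp/null" # no constraint, so the latest (v3.2.4) was selected
    }
  }
}
```

Committing the generated `.terraform.lock.hcl`, as the log recommends, pins these resolved versions for future `tofu init` runs.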
2026-04-09 00:02:32.203906 | orchestrator | ci.auto.tfvars
2026-04-09 00:02:32.609418 | orchestrator | default_custom.tf
2026-04-09 00:02:34.162648 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-09 00:02:34.718662 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-09 00:02:34.937187 | orchestrator |
2026-04-09 00:02:34.937254 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-09 00:02:34.937262 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-09 00:02:34.937286 | orchestrator |   + create
2026-04-09 00:02:34.937301 | orchestrator |  <= read (data resources)
2026-04-09 00:02:34.937313 | orchestrator |
2026-04-09 00:02:34.937318 | orchestrator | OpenTofu will perform the following actions:
2026-04-09 00:02:34.937421 | orchestrator |
2026-04-09 00:02:34.937441 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-04-09 00:02:34.937446 | orchestrator |   # (config refers to values not yet known)
2026-04-09 00:02:34.937450 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-04-09 00:02:34.937455 | orchestrator |       + checksum = (known after apply)
2026-04-09 00:02:34.937459 | orchestrator |       + created_at = (known after apply)
2026-04-09 00:02:34.937463 | orchestrator |       + file = (known after apply)
2026-04-09 00:02:34.937467 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.937485 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.937489 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-09 00:02:34.937493 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-09 00:02:34.937497 | orchestrator |       + most_recent = true
2026-04-09 00:02:34.937501 | orchestrator |       + name = (known after apply)
2026-04-09 00:02:34.937505 | orchestrator |       + protected = (known after apply)
2026-04-09 00:02:34.937509 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.937536 | orchestrator |       + schema = (known after apply)
2026-04-09 00:02:34.937541 | orchestrator |       + size_bytes = (known after apply)
2026-04-09 00:02:34.937545 | orchestrator |       + tags = (known after apply)
2026-04-09 00:02:34.937549 | orchestrator |       + updated_at = (known after apply)
2026-04-09 00:02:34.937553 | orchestrator |     }
2026-04-09 00:02:34.937640 | orchestrator |
2026-04-09 00:02:34.937652 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-04-09 00:02:34.937656 | orchestrator |   # (config refers to values not yet known)
2026-04-09 00:02:34.937660 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-04-09 00:02:34.937664 | orchestrator |       + checksum = (known after apply)
2026-04-09 00:02:34.937668 | orchestrator |       + created_at = (known after apply)
2026-04-09 00:02:34.937672 | orchestrator |       + file = (known after apply)
2026-04-09 00:02:34.937676 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.937679 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.937683 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-09 00:02:34.937687 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-09 00:02:34.937690 | orchestrator |       + most_recent = true
2026-04-09 00:02:34.937694 | orchestrator |       + name = (known after apply)
2026-04-09 00:02:34.937698 | orchestrator |       + protected = (known after apply)
2026-04-09 00:02:34.937702 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.937706 | orchestrator |       + schema = (known after apply)
2026-04-09 00:02:34.937709 | orchestrator |       + size_bytes = (known after apply)
2026-04-09 00:02:34.937713 | orchestrator |       + tags = (known after apply)
2026-04-09 00:02:34.937717 | orchestrator |       + updated_at = (known after apply)
2026-04-09 00:02:34.937721 | orchestrator |     }
2026-04-09 00:02:34.937790 | orchestrator |
2026-04-09 00:02:34.937802 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-04-09 00:02:34.937806 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-04-09 00:02:34.937811 | orchestrator |       + content = (known after apply)
2026-04-09 00:02:34.937815 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-09 00:02:34.937818 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-09 00:02:34.937822 | orchestrator |       + content_md5 = (known after apply)
2026-04-09 00:02:34.937826 | orchestrator |       + content_sha1 = (known after apply)
2026-04-09 00:02:34.937830 | orchestrator |       + content_sha256 = (known after apply)
2026-04-09 00:02:34.937835 | orchestrator |       + content_sha512 = (known after apply)
2026-04-09 00:02:34.937841 | orchestrator |       + directory_permission = "0777"
2026-04-09 00:02:34.937847 | orchestrator |       + file_permission = "0644"
2026-04-09 00:02:34.937853 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-04-09 00:02:34.937859 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.937863 | orchestrator |     }
2026-04-09 00:02:34.937932 | orchestrator |
2026-04-09 00:02:34.937943 | orchestrator |   # local_file.id_rsa_pub will be created
2026-04-09 00:02:34.937947 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-04-09 00:02:34.937951 | orchestrator |       + content = (known after apply)
2026-04-09 00:02:34.937955 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-09 00:02:34.937959 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-09 00:02:34.937962 | orchestrator |       + content_md5 = (known after apply)
2026-04-09 00:02:34.937966 | orchestrator |       + content_sha1 = (known after apply)
2026-04-09 00:02:34.937970 | orchestrator |       + content_sha256 = (known after apply)
2026-04-09 00:02:34.937974 | orchestrator |       + content_sha512 = (known after apply)
2026-04-09 00:02:34.937978 | orchestrator |       + directory_permission = "0777"
2026-04-09 00:02:34.937981 | orchestrator |       + file_permission = "0644"
2026-04-09 00:02:34.937990 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-04-09 00:02:34.937994 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.937998 | orchestrator |     }
2026-04-09 00:02:34.938089 | orchestrator |
2026-04-09 00:02:34.938114 | orchestrator |   # local_file.inventory will be created
2026-04-09 00:02:34.938119 | orchestrator |   + resource "local_file" "inventory" {
2026-04-09 00:02:34.938123 | orchestrator |       + content = (known after apply)
2026-04-09 00:02:34.938127 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-09 00:02:34.938131 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-09 00:02:34.938134 | orchestrator |       + content_md5 = (known after apply)
2026-04-09 00:02:34.938138 | orchestrator |       + content_sha1 = (known after apply)
2026-04-09 00:02:34.938142 | orchestrator |       + content_sha256 = (known after apply)
2026-04-09 00:02:34.938146 | orchestrator |       + content_sha512 = (known after apply)
2026-04-09 00:02:34.938150 | orchestrator |       + directory_permission = "0777"
2026-04-09 00:02:34.938154 | orchestrator |       + file_permission = "0644"
2026-04-09 00:02:34.938157 | orchestrator |       + filename = "inventory.ci"
2026-04-09 00:02:34.938161 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.938165 | orchestrator |     }
2026-04-09 00:02:34.938233 | orchestrator |
2026-04-09 00:02:34.938244 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-04-09 00:02:34.938249 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-04-09 00:02:34.938252 | orchestrator |       + content = (sensitive value)
2026-04-09 00:02:34.938256 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-09 00:02:34.938260 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-09 00:02:34.938264 | orchestrator |       + content_md5 = (known after apply)
2026-04-09 00:02:34.938268 | orchestrator |       + content_sha1 = (known after apply)
2026-04-09 00:02:34.938272 | orchestrator |       + content_sha256 = (known after apply)
2026-04-09 00:02:34.938275 | orchestrator |       + content_sha512 = (known after apply)
2026-04-09 00:02:34.938279 | orchestrator |       + directory_permission = "0700"
2026-04-09 00:02:34.938283 | orchestrator |       + file_permission = "0600"
2026-04-09 00:02:34.938287 | orchestrator |       + filename = ".id_rsa.ci"
2026-04-09 00:02:34.938291 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.938294 | orchestrator |     }
2026-04-09 00:02:34.938316 | orchestrator |
2026-04-09 00:02:34.938327 | orchestrator |   # null_resource.node_semaphore will be created
2026-04-09 00:02:34.938331 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-04-09 00:02:34.938335 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.938339 | orchestrator |     }
2026-04-09 00:02:34.938419 | orchestrator |
2026-04-09 00:02:34.938431 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-09 00:02:34.938436 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-09 00:02:34.938440 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.938444 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.938447 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.938451 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:34.938455 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.938459 | orchestrator |       + name = "testbed-volume-manager-base"
2026-04-09 00:02:34.938462 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.938466 | orchestrator |       + size = 80
2026-04-09 00:02:34.938470 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.938474 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.938478 | orchestrator |     }
2026-04-09 00:02:34.938580 | orchestrator |
2026-04-09 00:02:34.938593 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-09 00:02:34.938597 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:34.938601 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.938605 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.938609 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.938620 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:34.938624 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.938628 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-04-09 00:02:34.938632 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.938636 | orchestrator |       + size = 80
2026-04-09 00:02:34.938640 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.938643 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.938647 | orchestrator |     }
2026-04-09 00:02:34.938711 | orchestrator |
2026-04-09 00:02:34.938722 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-09 00:02:34.938726 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:34.938730 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.938734 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.938738 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.938741 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:34.938745 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.938749 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-04-09 00:02:34.938753 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.938756 | orchestrator |       + size = 80
2026-04-09 00:02:34.938760 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.938764 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.938768 | orchestrator |     }
2026-04-09 00:02:34.938830 | orchestrator |
2026-04-09 00:02:34.938841 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-09 00:02:34.938845 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:34.938849 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.938853 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.938857 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.938861 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:34.938864 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.938868 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-04-09 00:02:34.938872 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.938876 | orchestrator |       + size = 80
2026-04-09 00:02:34.938879 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.938883 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.938887 | orchestrator |     }
2026-04-09 00:02:34.938949 | orchestrator |
2026-04-09 00:02:34.938960 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-09 00:02:34.938964 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:34.938968 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.938972 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.938976 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.938979 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:34.938983 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.938990 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-04-09 00:02:34.938994 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.938998 | orchestrator |       + size = 80
2026-04-09 00:02:34.939002 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.939005 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.939009 | orchestrator |     }
2026-04-09 00:02:34.939066 | orchestrator |
2026-04-09 00:02:34.939077 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-09 00:02:34.939081 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:34.939085 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.939089 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.939093 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.939101 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:34.939104 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.939108 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-04-09 00:02:34.939112 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.939116 | orchestrator |       + size = 80
2026-04-09 00:02:34.939119 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.939123 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.939127 | orchestrator |     }
2026-04-09 00:02:34.939188 | orchestrator |
2026-04-09 00:02:34.939199 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-09 00:02:34.939203 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:34.939207 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.939211 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.939215 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.939219 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:34.939222 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.939226 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-04-09 00:02:34.939230 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.939234 | orchestrator |       + size = 80
2026-04-09 00:02:34.939237 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.939241 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.939245 | orchestrator |     }
2026-04-09 00:02:34.939302 | orchestrator |
2026-04-09 00:02:34.939314 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-09 00:02:34.939318 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:34.939322 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.939326 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.939330 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.939333 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.939337 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-04-09 00:02:34.939341 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.939345 | orchestrator |       + size = 20
2026-04-09 00:02:34.939349 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.939353 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.939356 | orchestrator |     }
2026-04-09 00:02:34.939413 | orchestrator |
2026-04-09 00:02:34.939424 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-09 00:02:34.939428 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:34.939432 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.939436 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.939440 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.939444 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.939447 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-04-09 00:02:34.939451 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.939455 | orchestrator |       + size = 20
2026-04-09 00:02:34.939458 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.939462 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.939466 | orchestrator |     }
2026-04-09 00:02:34.939569 | orchestrator |
2026-04-09 00:02:34.939582 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-09 00:02:34.939586 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:34.939590 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.939594 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.939597 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.939601 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.939605 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-04-09 00:02:34.939609 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.939617 | orchestrator |       + size = 20
2026-04-09 00:02:34.939621 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.939625 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.939629 | orchestrator |     }
2026-04-09 00:02:34.939686 | orchestrator |
2026-04-09 00:02:34.939698 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-09 00:02:34.939702 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:34.939706 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.939710 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.939713 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.939717 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.939721 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-04-09 00:02:34.939725 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.939729 | orchestrator |       + size = 20
2026-04-09 00:02:34.939732 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.939736 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.939740 | orchestrator |     }
2026-04-09 00:02:34.939796 | orchestrator |
2026-04-09 00:02:34.939807 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-09 00:02:34.939811 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:34.939815 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.939819 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.939822 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.939826 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.939830 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-04-09 00:02:34.939834 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.939841 | orchestrator |       + size = 20
2026-04-09 00:02:34.939845 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.939848 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.939852 | orchestrator |     }
2026-04-09 00:02:34.939994 | orchestrator |
2026-04-09 00:02:34.940086 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-09 00:02:34.940091 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:34.940095 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.940099 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.940103 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.940107 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.940111 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-04-09 00:02:34.940114 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.940118 | orchestrator |       + size = 20
2026-04-09 00:02:34.940122 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.940158 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.940163 | orchestrator |     }
2026-04-09 00:02:34.940316 | orchestrator |
2026-04-09 00:02:34.940331 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-09 00:02:34.940335 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:34.940339 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.940343 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.940347 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.940350 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.940385 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-04-09 00:02:34.940390 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.940393 | orchestrator |       + size = 20
2026-04-09 00:02:34.940397 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.940401 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.940405 | orchestrator |     }
2026-04-09 00:02:34.940609 | orchestrator |
2026-04-09 00:02:34.940643 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-09 00:02:34.940655 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:34.940694 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:34.940738 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:34.940750 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:34.940754 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:34.940758 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-04-09 00:02:34.940762 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:34.940765 | orchestrator |       + size = 20
2026-04-09 00:02:34.940778 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:34.940809 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:34.940814 | orchestrator |     }
2026-04-09 00:02:34.940992 | orchestrator |
2026-04-09 00:02:34.941005 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-09 00:02:34.941010 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-09 00:02:34.941014 | orchestrator | + attachment = (known after apply) 2026-04-09 00:02:34.941050 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:34.941084 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.941107 | orchestrator | + metadata = (known after apply) 2026-04-09 00:02:34.941111 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-09 00:02:34.941135 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.941139 | orchestrator | + size = 20 2026-04-09 00:02:34.941143 | orchestrator | + volume_retype_policy = "never" 2026-04-09 00:02:34.941147 | orchestrator | + volume_type = "ssd" 2026-04-09 00:02:34.941151 | orchestrator | } 2026-04-09 00:02:34.941596 | orchestrator | 2026-04-09 00:02:34.941612 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-09 00:02:34.941616 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-09 00:02:34.941620 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:34.941624 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:34.941651 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:34.941690 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:34.941695 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:34.941699 | orchestrator | + config_drive = true 2026-04-09 00:02:34.941703 | orchestrator | + created = (known after apply) 2026-04-09 00:02:34.941707 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:34.941734 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-09 00:02:34.941739 | orchestrator | + force_delete = false 2026-04-09 00:02:34.941743 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:34.941747 | 
orchestrator | + id = (known after apply) 2026-04-09 00:02:34.941751 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:34.941755 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:34.941781 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:34.941807 | orchestrator | + name = "testbed-manager" 2026-04-09 00:02:34.941812 | orchestrator | + power_state = "active" 2026-04-09 00:02:34.941816 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.941828 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:34.941832 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:34.941836 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:34.941839 | orchestrator | + user_data = (sensitive value) 2026-04-09 00:02:34.941843 | orchestrator | 2026-04-09 00:02:34.941847 | orchestrator | + block_device { 2026-04-09 00:02:34.941887 | orchestrator | + boot_index = 0 2026-04-09 00:02:34.941892 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:34.941900 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:34.941940 | orchestrator | + multiattach = false 2026-04-09 00:02:34.941948 | orchestrator | + source_type = "volume" 2026-04-09 00:02:34.941952 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.941961 | orchestrator | } 2026-04-09 00:02:34.941965 | orchestrator | 2026-04-09 00:02:34.941969 | orchestrator | + network { 2026-04-09 00:02:34.941973 | orchestrator | + access_network = false 2026-04-09 00:02:34.941977 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:34.941980 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:34.941984 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:34.941988 | orchestrator | + name = (known after apply) 2026-04-09 00:02:34.941992 | orchestrator | + port = (known after apply) 2026-04-09 00:02:34.941995 | orchestrator | + uuid = (known after apply) 2026-04-09 
00:02:34.941999 | orchestrator | } 2026-04-09 00:02:34.942003 | orchestrator | } 2026-04-09 00:02:34.942480 | orchestrator | 2026-04-09 00:02:34.942547 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-09 00:02:34.942571 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:34.942584 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:34.942588 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:34.942600 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:34.942605 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:34.942615 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:34.942620 | orchestrator | + config_drive = true 2026-04-09 00:02:34.942623 | orchestrator | + created = (known after apply) 2026-04-09 00:02:34.942627 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:34.942631 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:34.942634 | orchestrator | + force_delete = false 2026-04-09 00:02:34.942652 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:34.942656 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.942660 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:34.942671 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:34.942685 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:34.942722 | orchestrator | + name = "testbed-node-0" 2026-04-09 00:02:34.942750 | orchestrator | + power_state = "active" 2026-04-09 00:02:34.942754 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.942777 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:34.942780 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:34.942802 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:34.942806 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:34.942810 | orchestrator | 2026-04-09 00:02:34.942814 | orchestrator | + block_device { 2026-04-09 00:02:34.942818 | orchestrator | + boot_index = 0 2026-04-09 00:02:34.942862 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:34.942867 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:34.942871 | orchestrator | + multiattach = false 2026-04-09 00:02:34.942875 | orchestrator | + source_type = "volume" 2026-04-09 00:02:34.942901 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.942922 | orchestrator | } 2026-04-09 00:02:34.942926 | orchestrator | 2026-04-09 00:02:34.942930 | orchestrator | + network { 2026-04-09 00:02:34.942942 | orchestrator | + access_network = false 2026-04-09 00:02:34.942963 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:34.942975 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:34.942979 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:34.942983 | orchestrator | + name = (known after apply) 2026-04-09 00:02:34.943044 | orchestrator | + port = (known after apply) 2026-04-09 00:02:34.943049 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.943083 | orchestrator | } 2026-04-09 00:02:34.943112 | orchestrator | } 2026-04-09 00:02:34.943600 | orchestrator | 2026-04-09 00:02:34.943616 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-09 00:02:34.943621 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:34.943625 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:34.943661 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:34.943680 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:34.943685 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:34.943689 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:34.943693 
| orchestrator | + config_drive = true 2026-04-09 00:02:34.943706 | orchestrator | + created = (known after apply) 2026-04-09 00:02:34.943710 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:34.943714 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:34.943725 | orchestrator | + force_delete = false 2026-04-09 00:02:34.943729 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:34.943733 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.943751 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:34.943755 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:34.943759 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:34.943763 | orchestrator | + name = "testbed-node-1" 2026-04-09 00:02:34.943781 | orchestrator | + power_state = "active" 2026-04-09 00:02:34.943785 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.943789 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:34.943792 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:34.943796 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:34.943800 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:34.943804 | orchestrator | 2026-04-09 00:02:34.943815 | orchestrator | + block_device { 2026-04-09 00:02:34.943819 | orchestrator | + boot_index = 0 2026-04-09 00:02:34.943823 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:34.943827 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:34.943831 | orchestrator | + multiattach = false 2026-04-09 00:02:34.943834 | orchestrator | + source_type = "volume" 2026-04-09 00:02:34.943838 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.943842 | orchestrator | } 2026-04-09 00:02:34.943846 | orchestrator | 2026-04-09 00:02:34.943850 | orchestrator | + network { 2026-04-09 00:02:34.943854 | orchestrator | + access_network = 
false 2026-04-09 00:02:34.943857 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:34.943861 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:34.943865 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:34.943869 | orchestrator | + name = (known after apply) 2026-04-09 00:02:34.943872 | orchestrator | + port = (known after apply) 2026-04-09 00:02:34.943876 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.943880 | orchestrator | } 2026-04-09 00:02:34.943884 | orchestrator | } 2026-04-09 00:02:34.944512 | orchestrator | 2026-04-09 00:02:34.944575 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-09 00:02:34.944587 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:34.944591 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:34.944595 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:34.944646 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:34.944651 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:34.944688 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:34.944722 | orchestrator | + config_drive = true 2026-04-09 00:02:34.944726 | orchestrator | + created = (known after apply) 2026-04-09 00:02:34.944730 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:34.944734 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:34.944738 | orchestrator | + force_delete = false 2026-04-09 00:02:34.944742 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:34.944746 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.944749 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:34.944791 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:34.944796 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:34.944799 | orchestrator | + name = 
"testbed-node-2" 2026-04-09 00:02:34.944803 | orchestrator | + power_state = "active" 2026-04-09 00:02:34.944807 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.944811 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:34.944814 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:34.944818 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:34.944862 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:34.944867 | orchestrator | 2026-04-09 00:02:34.944871 | orchestrator | + block_device { 2026-04-09 00:02:34.944875 | orchestrator | + boot_index = 0 2026-04-09 00:02:34.944878 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:34.944882 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:34.944886 | orchestrator | + multiattach = false 2026-04-09 00:02:34.944890 | orchestrator | + source_type = "volume" 2026-04-09 00:02:34.944908 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.944912 | orchestrator | } 2026-04-09 00:02:34.944929 | orchestrator | 2026-04-09 00:02:34.944942 | orchestrator | + network { 2026-04-09 00:02:34.944946 | orchestrator | + access_network = false 2026-04-09 00:02:34.944950 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:34.944953 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:34.944965 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:34.945013 | orchestrator | + name = (known after apply) 2026-04-09 00:02:34.945039 | orchestrator | + port = (known after apply) 2026-04-09 00:02:34.945078 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.945083 | orchestrator | } 2026-04-09 00:02:34.945155 | orchestrator | } 2026-04-09 00:02:34.945427 | orchestrator | 2026-04-09 00:02:34.945464 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-09 00:02:34.945469 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:34.945473 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:34.945507 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:34.945533 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:34.945544 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:34.945549 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:34.945562 | orchestrator | + config_drive = true 2026-04-09 00:02:34.945573 | orchestrator | + created = (known after apply) 2026-04-09 00:02:34.945577 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:34.945581 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:34.945585 | orchestrator | + force_delete = false 2026-04-09 00:02:34.945589 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:34.945592 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.945596 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:34.945646 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:34.945659 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:34.945663 | orchestrator | + name = "testbed-node-3" 2026-04-09 00:02:34.945667 | orchestrator | + power_state = "active" 2026-04-09 00:02:34.945671 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.945716 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:34.945728 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:34.945745 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:34.945749 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:34.945753 | orchestrator | 2026-04-09 00:02:34.945757 | orchestrator | + block_device { 2026-04-09 00:02:34.945772 | orchestrator | + boot_index = 0 2026-04-09 00:02:34.945784 | orchestrator | + delete_on_termination = false 2026-04-09 
00:02:34.945789 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:34.945813 | orchestrator | + multiattach = false 2026-04-09 00:02:34.945838 | orchestrator | + source_type = "volume" 2026-04-09 00:02:34.945843 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.945846 | orchestrator | } 2026-04-09 00:02:34.945872 | orchestrator | 2026-04-09 00:02:34.945877 | orchestrator | + network { 2026-04-09 00:02:34.945880 | orchestrator | + access_network = false 2026-04-09 00:02:34.945884 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:34.945888 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:34.945892 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:34.945895 | orchestrator | + name = (known after apply) 2026-04-09 00:02:34.945899 | orchestrator | + port = (known after apply) 2026-04-09 00:02:34.945903 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.945925 | orchestrator | } 2026-04-09 00:02:34.945930 | orchestrator | } 2026-04-09 00:02:34.946435 | orchestrator | 2026-04-09 00:02:34.946450 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-09 00:02:34.946454 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:34.946458 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:34.946462 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:34.946475 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:34.946479 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:34.946482 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:34.946486 | orchestrator | + config_drive = true 2026-04-09 00:02:34.946490 | orchestrator | + created = (known after apply) 2026-04-09 00:02:34.946502 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:34.946506 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:34.946510 | 
orchestrator | + force_delete = false 2026-04-09 00:02:34.946514 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:34.946553 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.946557 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:34.946561 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:34.946565 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:34.946569 | orchestrator | + name = "testbed-node-4" 2026-04-09 00:02:34.946573 | orchestrator | + power_state = "active" 2026-04-09 00:02:34.946577 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.946581 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:34.946584 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:34.946588 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:34.946592 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:34.946596 | orchestrator | 2026-04-09 00:02:34.946600 | orchestrator | + block_device { 2026-04-09 00:02:34.946604 | orchestrator | + boot_index = 0 2026-04-09 00:02:34.946608 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:34.946612 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:34.946615 | orchestrator | + multiattach = false 2026-04-09 00:02:34.946619 | orchestrator | + source_type = "volume" 2026-04-09 00:02:34.946623 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.946627 | orchestrator | } 2026-04-09 00:02:34.946630 | orchestrator | 2026-04-09 00:02:34.946634 | orchestrator | + network { 2026-04-09 00:02:34.946638 | orchestrator | + access_network = false 2026-04-09 00:02:34.946673 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:34.946707 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:34.946711 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:34.946715 | orchestrator | + name = (known 
after apply) 2026-04-09 00:02:34.946719 | orchestrator | + port = (known after apply) 2026-04-09 00:02:34.946789 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.946794 | orchestrator | } 2026-04-09 00:02:34.946798 | orchestrator | } 2026-04-09 00:02:34.947455 | orchestrator | 2026-04-09 00:02:34.947506 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-09 00:02:34.947532 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:34.947536 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:34.947540 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:34.947544 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:34.947549 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:34.947552 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:34.947556 | orchestrator | + config_drive = true 2026-04-09 00:02:34.947696 | orchestrator | + created = (known after apply) 2026-04-09 00:02:34.947734 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:34.947739 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:34.947742 | orchestrator | + force_delete = false 2026-04-09 00:02:34.947770 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:34.947850 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.947952 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:34.948037 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:34.948057 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:34.948061 | orchestrator | + name = "testbed-node-5" 2026-04-09 00:02:34.948065 | orchestrator | + power_state = "active" 2026-04-09 00:02:34.948069 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.948158 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:34.948200 | orchestrator | + 
stop_before_destroy = false 2026-04-09 00:02:34.948243 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:34.948322 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:34.948330 | orchestrator | 2026-04-09 00:02:34.948334 | orchestrator | + block_device { 2026-04-09 00:02:34.948337 | orchestrator | + boot_index = 0 2026-04-09 00:02:34.948341 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:34.948345 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:34.948349 | orchestrator | + multiattach = false 2026-04-09 00:02:34.948353 | orchestrator | + source_type = "volume" 2026-04-09 00:02:34.948357 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.948360 | orchestrator | } 2026-04-09 00:02:34.948364 | orchestrator | 2026-04-09 00:02:34.948368 | orchestrator | + network { 2026-04-09 00:02:34.948372 | orchestrator | + access_network = false 2026-04-09 00:02:34.948376 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:34.948379 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:34.948383 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:34.948387 | orchestrator | + name = (known after apply) 2026-04-09 00:02:34.948391 | orchestrator | + port = (known after apply) 2026-04-09 00:02:34.948395 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:34.948399 | orchestrator | } 2026-04-09 00:02:34.948403 | orchestrator | } 2026-04-09 00:02:34.948633 | orchestrator | 2026-04-09 00:02:34.948669 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-09 00:02:34.948707 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-09 00:02:34.948712 | orchestrator | + fingerprint = (known after apply) 2026-04-09 00:02:34.948716 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.948720 | orchestrator | + name = "testbed" 2026-04-09 00:02:34.948726 | orchestrator | + private_key = 
(sensitive value) 2026-04-09 00:02:34.948730 | orchestrator | + public_key = (known after apply) 2026-04-09 00:02:34.948779 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.948783 | orchestrator | + user_id = (known after apply) 2026-04-09 00:02:34.948787 | orchestrator | } 2026-04-09 00:02:34.948847 | orchestrator | 2026-04-09 00:02:34.948908 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-09 00:02:34.948914 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-09 00:02:34.948944 | orchestrator | + device = (known after apply) 2026-04-09 00:02:34.948949 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.948952 | orchestrator | + instance_id = (known after apply) 2026-04-09 00:02:34.948956 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.948960 | orchestrator | + volume_id = (known after apply) 2026-04-09 00:02:34.948964 | orchestrator | } 2026-04-09 00:02:34.949048 | orchestrator | 2026-04-09 00:02:34.949061 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-09 00:02:34.949066 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-09 00:02:34.949070 | orchestrator | + device = (known after apply) 2026-04-09 00:02:34.949074 | orchestrator | + id = (known after apply) 2026-04-09 00:02:34.949077 | orchestrator | + instance_id = (known after apply) 2026-04-09 00:02:34.949081 | orchestrator | + region = (known after apply) 2026-04-09 00:02:34.949085 | orchestrator | + volume_id = (known after apply) 2026-04-09 00:02:34.949088 | orchestrator | } 2026-04-09 00:02:34.949132 | orchestrator | 2026-04-09 00:02:34.949144 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-09 00:02:34.949148 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-04-09 00:02:34.949152 | orchestrator | + device = (known after apply)
2026-04-09 00:02:34.949156 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.949160 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:34.949164 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.949167 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:34.949171 | orchestrator | }
2026-04-09 00:02:34.949210 | orchestrator |
2026-04-09 00:02:34.949222 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-04-09 00:02:34.949226 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:34.949230 | orchestrator | + device = (known after apply)
2026-04-09 00:02:34.949234 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.949237 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:34.949241 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.949245 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:34.949249 | orchestrator | }
2026-04-09 00:02:34.949284 | orchestrator |
2026-04-09 00:02:34.949296 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-04-09 00:02:34.949300 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:34.949304 | orchestrator | + device = (known after apply)
2026-04-09 00:02:34.949308 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.949312 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:34.949320 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.949324 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:34.949327 | orchestrator | }
2026-04-09 00:02:34.949401 | orchestrator |
2026-04-09 00:02:34.949413 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-04-09 00:02:34.949417 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:34.949421 | orchestrator | + device = (known after apply)
2026-04-09 00:02:34.949425 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.949429 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:34.949432 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.949436 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:34.949440 | orchestrator | }
2026-04-09 00:02:34.949592 | orchestrator |
2026-04-09 00:02:34.949605 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-04-09 00:02:34.949609 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:34.949613 | orchestrator | + device = (known after apply)
2026-04-09 00:02:34.949617 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.949621 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:34.949624 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.949633 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:34.949637 | orchestrator | }
2026-04-09 00:02:34.949827 | orchestrator |
2026-04-09 00:02:34.949840 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-04-09 00:02:34.949845 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:34.949849 | orchestrator | + device = (known after apply)
2026-04-09 00:02:34.949852 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.949856 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:34.949860 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.949864 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:34.949894 | orchestrator | }
2026-04-09 00:02:34.950069 | orchestrator |
2026-04-09 00:02:34.950111 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-04-09 00:02:34.950116 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:34.950127 | orchestrator | + device = (known after apply)
2026-04-09 00:02:34.950139 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.950143 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:34.950147 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.950158 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:34.950162 | orchestrator | }
2026-04-09 00:02:34.950200 | orchestrator |
2026-04-09 00:02:34.950229 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-04-09 00:02:34.950234 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-04-09 00:02:34.950246 | orchestrator | + fixed_ip = (known after apply)
2026-04-09 00:02:34.950250 | orchestrator | + floating_ip = (known after apply)
2026-04-09 00:02:34.950254 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.950304 | orchestrator | + port_id = (known after apply)
2026-04-09 00:02:34.950337 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.950342 | orchestrator | }
2026-04-09 00:02:34.950635 | orchestrator |
2026-04-09 00:02:34.950656 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-04-09 00:02:34.950660 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-04-09 00:02:34.950664 | orchestrator | + address = (known after apply)
2026-04-09 00:02:34.950668 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.950672 | orchestrator | + dns_domain = (known after apply)
2026-04-09 00:02:34.950676 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:34.950708 | orchestrator | + fixed_ip = (known after apply)
2026-04-09 00:02:34.950756 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.950768 | orchestrator | + pool = "public"
2026-04-09 00:02:34.950773 | orchestrator | + port_id = (known after apply)
2026-04-09 00:02:34.950777 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.950780 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.950784 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.950788 | orchestrator | }
2026-04-09 00:02:34.950927 | orchestrator |
2026-04-09 00:02:34.950982 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-04-09 00:02:34.950987 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-04-09 00:02:34.950992 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.950996 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.951000 | orchestrator | + availability_zone_hints = [
2026-04-09 00:02:34.951004 | orchestrator | + "nova",
2026-04-09 00:02:34.951039 | orchestrator | ]
2026-04-09 00:02:34.951053 | orchestrator | + dns_domain = (known after apply)
2026-04-09 00:02:34.951057 | orchestrator | + external = (known after apply)
2026-04-09 00:02:34.951061 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.951065 | orchestrator | + mtu = (known after apply)
2026-04-09 00:02:34.951069 | orchestrator | + name = "net-testbed-management"
2026-04-09 00:02:34.951073 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:34.951128 | orchestrator | + qos_policy_id = (known after apply)
2026-04-09 00:02:34.951133 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.951209 | orchestrator | + shared = (known after apply)
2026-04-09 00:02:34.951213 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.951226 | orchestrator | + transparent_vlan = (known after apply)
2026-04-09 00:02:34.951231 | orchestrator |
2026-04-09 00:02:34.951235 | orchestrator | + segments (known after apply)
2026-04-09 00:02:34.951238 | orchestrator | }
2026-04-09 00:02:34.951484 | orchestrator |
2026-04-09 00:02:34.951557 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-04-09 00:02:34.951572 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-04-09 00:02:34.951576 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.951587 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-09 00:02:34.951591 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-09 00:02:34.951634 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.951647 | orchestrator | + device_id = (known after apply)
2026-04-09 00:02:34.951651 | orchestrator | + device_owner = (known after apply)
2026-04-09 00:02:34.951662 | orchestrator | + dns_assignment = (known after apply)
2026-04-09 00:02:34.951666 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:34.951670 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.951673 | orchestrator | + mac_address = (known after apply)
2026-04-09 00:02:34.951677 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:34.951681 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:34.951715 | orchestrator | + qos_policy_id = (known after apply)
2026-04-09 00:02:34.951719 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.951722 | orchestrator | + security_group_ids = (known after apply)
2026-04-09 00:02:34.951734 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.951738 | orchestrator |
2026-04-09 00:02:34.951750 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.951754 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-09 00:02:34.951758 | orchestrator | }
2026-04-09 00:02:34.951762 | orchestrator |
2026-04-09 00:02:34.951766 | orchestrator | + binding (known after apply)
2026-04-09 00:02:34.951770 | orchestrator |
2026-04-09 00:02:34.951774 | orchestrator | + fixed_ip {
2026-04-09 00:02:34.951817 | orchestrator | + ip_address = "192.168.16.5"
2026-04-09 00:02:34.951829 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.951843 | orchestrator | }
2026-04-09 00:02:34.951847 | orchestrator | }
2026-04-09 00:02:34.952125 | orchestrator |
2026-04-09 00:02:34.952139 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-04-09 00:02:34.952143 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-09 00:02:34.952147 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.952151 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-09 00:02:34.952155 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-09 00:02:34.952158 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.952162 | orchestrator | + device_id = (known after apply)
2026-04-09 00:02:34.952166 | orchestrator | + device_owner = (known after apply)
2026-04-09 00:02:34.952170 | orchestrator | + dns_assignment = (known after apply)
2026-04-09 00:02:34.952174 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:34.952177 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.952181 | orchestrator | + mac_address = (known after apply)
2026-04-09 00:02:34.952185 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:34.952189 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:34.952193 | orchestrator | + qos_policy_id = (known after apply)
2026-04-09 00:02:34.952196 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.952288 | orchestrator | + security_group_ids = (known after apply)
2026-04-09 00:02:34.952292 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.952296 | orchestrator |
2026-04-09 00:02:34.952300 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.952303 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-09 00:02:34.952307 | orchestrator | }
2026-04-09 00:02:34.952311 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.952315 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-09 00:02:34.952319 | orchestrator | }
2026-04-09 00:02:34.952323 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.952326 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-09 00:02:34.952330 | orchestrator | }
2026-04-09 00:02:34.952334 | orchestrator |
2026-04-09 00:02:34.952338 | orchestrator | + binding (known after apply)
2026-04-09 00:02:34.952341 | orchestrator |
2026-04-09 00:02:34.952345 | orchestrator | + fixed_ip {
2026-04-09 00:02:34.952349 | orchestrator | + ip_address = "192.168.16.10"
2026-04-09 00:02:34.952353 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.952357 | orchestrator | }
2026-04-09 00:02:34.952361 | orchestrator | }
2026-04-09 00:02:34.952730 | orchestrator |
2026-04-09 00:02:34.952747 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-04-09 00:02:34.952752 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-09 00:02:34.952787 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.952792 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-09 00:02:34.952796 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-09 00:02:34.952800 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.952804 | orchestrator | + device_id = (known after apply)
2026-04-09 00:02:34.952807 | orchestrator | + device_owner = (known after apply)
2026-04-09 00:02:34.952811 | orchestrator | + dns_assignment = (known after apply)
2026-04-09 00:02:34.952815 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:34.952819 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.952822 | orchestrator | + mac_address = (known after apply)
2026-04-09 00:02:34.952826 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:34.952830 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:34.952833 | orchestrator | + qos_policy_id = (known after apply)
2026-04-09 00:02:34.952837 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.952841 | orchestrator | + security_group_ids = (known after apply)
2026-04-09 00:02:34.952845 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.952848 | orchestrator |
2026-04-09 00:02:34.952852 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.952856 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-09 00:02:34.952860 | orchestrator | }
2026-04-09 00:02:34.952864 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.952867 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-09 00:02:34.952871 | orchestrator | }
2026-04-09 00:02:34.952875 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.952879 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-09 00:02:34.952882 | orchestrator | }
2026-04-09 00:02:34.952886 | orchestrator |
2026-04-09 00:02:34.952890 | orchestrator | + binding (known after apply)
2026-04-09 00:02:34.952894 | orchestrator |
2026-04-09 00:02:34.952897 | orchestrator | + fixed_ip {
2026-04-09 00:02:34.952901 | orchestrator | + ip_address = "192.168.16.11"
2026-04-09 00:02:34.952905 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.952909 | orchestrator | }
2026-04-09 00:02:34.952912 | orchestrator | }
2026-04-09 00:02:34.953159 | orchestrator |
2026-04-09 00:02:34.953174 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-04-09 00:02:34.953178 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-09 00:02:34.953182 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.953186 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-09 00:02:34.953190 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-09 00:02:34.953232 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.953251 | orchestrator | + device_id = (known after apply)
2026-04-09 00:02:34.953255 | orchestrator | + device_owner = (known after apply)
2026-04-09 00:02:34.953259 | orchestrator | + dns_assignment = (known after apply)
2026-04-09 00:02:34.953263 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:34.953280 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.953284 | orchestrator | + mac_address = (known after apply)
2026-04-09 00:02:34.953288 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:34.953291 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:34.953295 | orchestrator | + qos_policy_id = (known after apply)
2026-04-09 00:02:34.953299 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.953312 | orchestrator | + security_group_ids = (known after apply)
2026-04-09 00:02:34.953317 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.953328 | orchestrator |
2026-04-09 00:02:34.953366 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.953378 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-09 00:02:34.953383 | orchestrator | }
2026-04-09 00:02:34.953387 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.953398 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-09 00:02:34.953402 | orchestrator | }
2026-04-09 00:02:34.953406 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.953410 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-09 00:02:34.953444 | orchestrator | }
2026-04-09 00:02:34.953449 | orchestrator |
2026-04-09 00:02:34.953453 | orchestrator | + binding (known after apply)
2026-04-09 00:02:34.953494 | orchestrator |
2026-04-09 00:02:34.953498 | orchestrator | + fixed_ip {
2026-04-09 00:02:34.953561 | orchestrator | + ip_address = "192.168.16.12"
2026-04-09 00:02:34.953565 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.953569 | orchestrator | }
2026-04-09 00:02:34.953618 | orchestrator | }
2026-04-09 00:02:34.953952 | orchestrator |
2026-04-09 00:02:34.953966 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-04-09 00:02:34.953971 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-09 00:02:34.954001 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.954060 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-09 00:02:34.954065 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-09 00:02:34.954069 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.954112 | orchestrator | + device_id = (known after apply)
2026-04-09 00:02:34.954124 | orchestrator | + device_owner = (known after apply)
2026-04-09 00:02:34.954184 | orchestrator | + dns_assignment = (known after apply)
2026-04-09 00:02:34.954188 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:34.954192 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.954214 | orchestrator | + mac_address = (known after apply)
2026-04-09 00:02:34.954218 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:34.954246 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:34.954251 | orchestrator | + qos_policy_id = (known after apply)
2026-04-09 00:02:34.954255 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.954286 | orchestrator | + security_group_ids = (known after apply)
2026-04-09 00:02:34.954297 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.954301 | orchestrator |
2026-04-09 00:02:34.954305 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.954316 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-09 00:02:34.954343 | orchestrator | }
2026-04-09 00:02:34.954375 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.954407 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-09 00:02:34.954412 | orchestrator | }
2026-04-09 00:02:34.954416 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.954435 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-09 00:02:34.954440 | orchestrator | }
2026-04-09 00:02:34.954444 | orchestrator |
2026-04-09 00:02:34.954462 | orchestrator | + binding (known after apply)
2026-04-09 00:02:34.954476 | orchestrator |
2026-04-09 00:02:34.954480 | orchestrator | + fixed_ip {
2026-04-09 00:02:34.954484 | orchestrator | + ip_address = "192.168.16.13"
2026-04-09 00:02:34.954488 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.954491 | orchestrator | }
2026-04-09 00:02:34.954495 | orchestrator | }
2026-04-09 00:02:34.954914 | orchestrator |
2026-04-09 00:02:34.954929 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-04-09 00:02:34.954934 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-09 00:02:34.954947 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.954952 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-09 00:02:34.954964 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-09 00:02:34.954969 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.954973 | orchestrator | + device_id = (known after apply)
2026-04-09 00:02:34.954984 | orchestrator | + device_owner = (known after apply)
2026-04-09 00:02:34.954997 | orchestrator | + dns_assignment = (known after apply)
2026-04-09 00:02:34.955001 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:34.955012 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.955015 | orchestrator | + mac_address = (known after apply)
2026-04-09 00:02:34.955034 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:34.955046 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:34.955051 | orchestrator | + qos_policy_id = (known after apply)
2026-04-09 00:02:34.955055 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.955058 | orchestrator | + security_group_ids = (known after apply)
2026-04-09 00:02:34.955062 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.955067 | orchestrator |
2026-04-09 00:02:34.955093 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.955098 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-09 00:02:34.955131 | orchestrator | }
2026-04-09 00:02:34.955163 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.955168 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-09 00:02:34.955172 | orchestrator | }
2026-04-09 00:02:34.955176 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.955179 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-09 00:02:34.955183 | orchestrator | }
2026-04-09 00:02:34.955187 | orchestrator |
2026-04-09 00:02:34.955191 | orchestrator | + binding (known after apply)
2026-04-09 00:02:34.955195 | orchestrator |
2026-04-09 00:02:34.955213 | orchestrator | + fixed_ip {
2026-04-09 00:02:34.955217 | orchestrator | + ip_address = "192.168.16.14"
2026-04-09 00:02:34.955221 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.955225 | orchestrator | }
2026-04-09 00:02:34.955229 | orchestrator | }
2026-04-09 00:02:34.955423 | orchestrator |
2026-04-09 00:02:34.955435 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-04-09 00:02:34.955440 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-09 00:02:34.955444 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.955448 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-09 00:02:34.955452 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-09 00:02:34.955467 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.955478 | orchestrator | + device_id = (known after apply)
2026-04-09 00:02:34.955482 | orchestrator | + device_owner = (known after apply)
2026-04-09 00:02:34.955486 | orchestrator | + dns_assignment = (known after apply)
2026-04-09 00:02:34.955490 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:34.955494 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.955497 | orchestrator | + mac_address = (known after apply)
2026-04-09 00:02:34.955510 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:34.955514 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:34.955591 | orchestrator | + qos_policy_id = (known after apply)
2026-04-09 00:02:34.955610 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.955622 | orchestrator | + security_group_ids = (known after apply)
2026-04-09 00:02:34.955626 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.955661 | orchestrator |
2026-04-09 00:02:34.955666 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.955692 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-09 00:02:34.955697 | orchestrator | }
2026-04-09 00:02:34.955701 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.955714 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-09 00:02:34.955718 | orchestrator | }
2026-04-09 00:02:34.955722 | orchestrator | + allowed_address_pairs {
2026-04-09 00:02:34.955733 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-09 00:02:34.955737 | orchestrator | }
2026-04-09 00:02:34.955741 | orchestrator |
2026-04-09 00:02:34.955756 | orchestrator | + binding (known after apply)
2026-04-09 00:02:34.955760 | orchestrator |
2026-04-09 00:02:34.955764 | orchestrator | + fixed_ip {
2026-04-09 00:02:34.955768 | orchestrator | + ip_address = "192.168.16.15"
2026-04-09 00:02:34.955772 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.955783 | orchestrator | }
2026-04-09 00:02:34.955787 | orchestrator | }
2026-04-09 00:02:34.955857 | orchestrator |
2026-04-09 00:02:34.955869 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-04-09 00:02:34.955874 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-04-09 00:02:34.955878 | orchestrator | + force_destroy = false
2026-04-09 00:02:34.955882 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.955895 | orchestrator | + port_id = (known after apply)
2026-04-09 00:02:34.955899 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.955903 | orchestrator | + router_id = (known after apply)
2026-04-09 00:02:34.955907 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:34.955911 | orchestrator | }
2026-04-09 00:02:34.956179 | orchestrator |
2026-04-09 00:02:34.956192 | orchestrator | # openstack_networking_router_v2.router will be created
2026-04-09 00:02:34.956196 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-04-09 00:02:34.956200 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:34.956204 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.956208 | orchestrator | + availability_zone_hints = [
2026-04-09 00:02:34.956212 | orchestrator | + "nova",
2026-04-09 00:02:34.956224 | orchestrator | ]
2026-04-09 00:02:34.956229 | orchestrator | + distributed = (known after apply)
2026-04-09 00:02:34.956232 | orchestrator | + enable_snat = (known after apply)
2026-04-09 00:02:34.956243 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-04-09 00:02:34.956285 | orchestrator | + external_qos_policy_id = (known after apply)
2026-04-09 00:02:34.956308 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.956312 | orchestrator | + name = "testbed"
2026-04-09 00:02:34.956350 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.956388 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.956393 | orchestrator |
2026-04-09 00:02:34.956427 | orchestrator | + external_fixed_ip (known after apply)
2026-04-09 00:02:34.956439 | orchestrator | }
2026-04-09 00:02:34.956733 | orchestrator |
2026-04-09 00:02:34.956754 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-04-09 00:02:34.956759 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-04-09 00:02:34.956771 | orchestrator | + description = "ssh"
2026-04-09 00:02:34.956782 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.956787 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.956790 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.956794 | orchestrator | + port_range_max = 22
2026-04-09 00:02:34.956849 | orchestrator | + port_range_min = 22
2026-04-09 00:02:34.956853 | orchestrator | + protocol = "tcp"
2026-04-09 00:02:34.956857 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.956866 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.956870 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.956874 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:34.956878 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.956881 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.956933 | orchestrator | }
2026-04-09 00:02:34.957194 | orchestrator |
2026-04-09 00:02:34.957253 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-04-09 00:02:34.957265 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-04-09 00:02:34.957269 | orchestrator | + description = "wireguard"
2026-04-09 00:02:34.957301 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.957305 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.957309 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.957313 | orchestrator | + port_range_max = 51820
2026-04-09 00:02:34.957317 | orchestrator | + port_range_min = 51820
2026-04-09 00:02:34.957351 | orchestrator | + protocol = "udp"
2026-04-09 00:02:34.957356 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.957367 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.957371 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.957382 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:34.957386 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.957390 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.957394 | orchestrator | }
2026-04-09 00:02:34.957626 | orchestrator |
2026-04-09 00:02:34.957640 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-04-09 00:02:34.957644 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-04-09 00:02:34.957671 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.957702 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.957716 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.957721 | orchestrator | + protocol = "tcp"
2026-04-09 00:02:34.957725 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.957728 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.957732 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.957736 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-04-09 00:02:34.957740 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.957744 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.957747 | orchestrator | }
2026-04-09 00:02:34.957830 | orchestrator |
2026-04-09 00:02:34.957863 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-04-09 00:02:34.957897 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-04-09 00:02:34.957901 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.957938 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.957976 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.957980 | orchestrator | + protocol = "udp"
2026-04-09 00:02:34.957984 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.958035 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.958058 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.958070 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-04-09 00:02:34.958082 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.958087 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.958090 | orchestrator | }
2026-04-09 00:02:34.958257 | orchestrator |
2026-04-09 00:02:34.958285 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-04-09 00:02:34.958304 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-04-09 00:02:34.958309 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.958320 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.958324 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.958328 | orchestrator | + protocol = "icmp"
2026-04-09 00:02:34.958332 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.958336 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.958340 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.958343 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:34.958347 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.958351 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.958355 | orchestrator | }
2026-04-09 00:02:34.958528 | orchestrator |
2026-04-09 00:02:34.958601 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-04-09 00:02:34.958607 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-04-09 00:02:34.958612 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.958615 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.958619 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.958623 | orchestrator | + protocol = "tcp"
2026-04-09 00:02:34.958627 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.958631 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.958642 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.958682 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:34.958722 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.958726 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.958730 | orchestrator | }
2026-04-09 00:02:34.958894 | orchestrator |
2026-04-09 00:02:34.958907 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-04-09 00:02:34.958912 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-04-09 00:02:34.958916 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.958946 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.958951 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.958955 | orchestrator | + protocol = "udp"
2026-04-09 00:02:34.958959 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.958994 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.958998 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.959002 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:34.959073 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.959079 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.959100 | orchestrator | }
2026-04-09 00:02:34.959316 | orchestrator |
2026-04-09 00:02:34.959339 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-04-09 00:02:34.959344 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-04-09 00:02:34.959348 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.959389 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.959414 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.959426 | orchestrator | + protocol = "icmp"
2026-04-09 00:02:34.959437 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.959443 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.959447 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.959451 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:34.959455 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.959458 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.959467 | orchestrator | }
2026-04-09 00:02:34.959577 | orchestrator |
2026-04-09 00:02:34.959591 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-04-09 00:02:34.959605 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-04-09 00:02:34.959609 | orchestrator | + description = "vrrp"
2026-04-09 00:02:34.959613 | orchestrator | + direction = "ingress"
2026-04-09 00:02:34.959617 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:34.959620 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.959624 | orchestrator | + protocol = "112"
2026-04-09 00:02:34.959628 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.959632 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:34.959635 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:34.959639 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:34.959643 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:34.959655 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.959674 | orchestrator | }
2026-04-09 00:02:34.959731 | orchestrator |
2026-04-09 00:02:34.959743 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-04-09 00:02:34.959773 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-04-09 00:02:34.959777 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.959781 | orchestrator | + description = "management security group"
2026-04-09 00:02:34.959799 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.959804 | orchestrator | + name = "testbed-management"
2026-04-09 00:02:34.959808 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.959812 | orchestrator | + stateful = (known after apply)
2026-04-09 00:02:34.959816 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.959876 | orchestrator | }
2026-04-09 00:02:34.960113 | orchestrator |
2026-04-09 00:02:34.960128 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-04-09 00:02:34.960132 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-04-09 00:02:34.960148 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.960152 | orchestrator | + description = "node security group"
2026-04-09 00:02:34.960164 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.960181 | orchestrator | + name = "testbed-node"
2026-04-09 00:02:34.960186 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.960190 | orchestrator | + stateful = (known after apply)
2026-04-09 00:02:34.960223 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.960228 | orchestrator | }
2026-04-09 00:02:34.960392 | orchestrator |
2026-04-09 00:02:34.960406 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-04-09 00:02:34.960410 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-04-09 00:02:34.960414 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:34.960418 | orchestrator | + cidr = "192.168.16.0/20"
2026-04-09 00:02:34.960436 | orchestrator | + dns_nameservers = [
2026-04-09 00:02:34.960461 | orchestrator | + "8.8.8.8",
2026-04-09 00:02:34.960466 | orchestrator | + "9.9.9.9",
2026-04-09 00:02:34.960477 | orchestrator | ]
2026-04-09 00:02:34.960490 | orchestrator | + enable_dhcp = true
2026-04-09 00:02:34.960501 | orchestrator | + gateway_ip = (known after apply)
2026-04-09 00:02:34.960512 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.960536 | orchestrator | + ip_version = 4
2026-04-09 00:02:34.960541 | orchestrator | + ipv6_address_mode = (known after apply)
2026-04-09 00:02:34.960559 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-04-09 00:02:34.960563 | orchestrator | + name = "subnet-testbed-management"
2026-04-09 00:02:34.960567 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:34.960598 | orchestrator | + no_gateway = false
2026-04-09 00:02:34.960603 | orchestrator | + region = (known after apply)
2026-04-09 00:02:34.960615 | orchestrator | + service_types = (known after apply)
2026-04-09 00:02:34.960625 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:34.960629 | orchestrator |
2026-04-09 00:02:34.960633 | orchestrator | + allocation_pool {
2026-04-09 00:02:34.960637 | orchestrator | + end = "192.168.31.250"
2026-04-09 00:02:34.960641 | orchestrator | + start = "192.168.31.200"
2026-04-09 00:02:34.960644 | orchestrator | }
2026-04-09 00:02:34.960750 | orchestrator | }
2026-04-09 00:02:34.960837 | orchestrator |
2026-04-09 00:02:34.960849 | orchestrator | # terraform_data.image will be created
2026-04-09 00:02:34.960854 | orchestrator | + resource "terraform_data" "image" {
2026-04-09 00:02:34.960858 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.960862 | orchestrator | + input = "Ubuntu 24.04"
2026-04-09 00:02:34.960895 | orchestrator | + output = (known after apply)
2026-04-09 00:02:34.960900 | orchestrator | }
2026-04-09 00:02:34.961006 | orchestrator |
2026-04-09 00:02:34.961046 | orchestrator | # terraform_data.image_node will be created
2026-04-09 00:02:34.961058 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-09 00:02:34.961069 | orchestrator | + id = (known after apply)
2026-04-09 00:02:34.961073 | orchestrator | + input = "Ubuntu 24.04"
2026-04-09 00:02:34.961077 | orchestrator | + output = (known after apply)
2026-04-09 00:02:34.961096 | orchestrator | }
2026-04-09 00:02:34.961112 | orchestrator |
2026-04-09 00:02:34.961117 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-04-09 00:02:34.961143 | orchestrator |
2026-04-09 00:02:34.961148 | orchestrator | Changes to Outputs:
2026-04-09 00:02:34.961158 | orchestrator | + manager_address = (sensitive value)
2026-04-09 00:02:34.961163 | orchestrator | + private_key = (sensitive value)
2026-04-09 00:02:35.444481 | orchestrator | terraform_data.image: Creating...
2026-04-09 00:02:35.444881 | orchestrator | terraform_data.image: Creation complete after 0s [id=11a23f38-e888-4e44-2505-d4ed9d6e2384]
2026-04-09 00:02:35.581892 | orchestrator | terraform_data.image_node: Creating...
2026-04-09 00:02:35.582238 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=cd152f56-94ac-77ff-9ae1-0336c38f5ce3]
2026-04-09 00:02:35.612679 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-09 00:02:35.613116 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-09 00:02:35.633665 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-09 00:02:35.635180 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-09 00:02:35.635859 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-09 00:02:35.654677 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-09 00:02:35.655077 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-09 00:02:35.655483 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-09 00:02:35.656326 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-09 00:02:35.658053 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-09 00:02:36.115848 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-09 00:02:36.118286 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-09 00:02:36.123197 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-09 00:02:36.125263 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-09 00:02:36.275785 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-09 00:02:36.280004 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-09 00:02:36.868451 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=937b565a-6446-4a1d-9c82-901b5c599ef8]
2026-04-09 00:02:36.876660 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-09 00:02:39.323713 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=c3f900a3-fa00-488b-a223-0b2f981ffe7d]
2026-04-09 00:02:39.323824 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=03bd35e9-2f61-41d7-a9bf-42d58136cbb2]
2026-04-09 00:02:39.330150 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-09 00:02:39.331647 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-09 00:02:39.338962 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=2bf0158d-8270-4c58-8b9a-c2cfdc8e7669]
2026-04-09 00:02:39.346488 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-09 00:02:39.360146 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=5d65a6d0-57b2-4d69-9f6c-8b44337eed1f]
2026-04-09 00:02:39.362580 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=86513dfb-f28c-4b30-a867-1cbb67da9299]
2026-04-09 00:02:39.366246 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-09 00:02:39.367005 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-09 00:02:39.416543 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=47390fa5-1f85-4c3c-be39-aeec9b514289]
2026-04-09 00:02:39.425667 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-09 00:02:39.440396 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=81032de5-e928-481e-b1b2-e1c42e1209c2]
2026-04-09 00:02:39.454772 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-09 00:02:39.463873 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=933c94da17ccb594ca63a4670534c5ffe846cb88]
2026-04-09 00:02:39.477022 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-09 00:02:39.480713 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=59d5e8e2b6f4cc2c8d43c1c9e405ab99c58dd2cb]
2026-04-09 00:02:39.486988 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-09 00:02:39.505801 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=32d367e8-aaa8-48f2-9cf3-723daee201fb]
2026-04-09 00:02:39.549951 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965]
2026-04-09 00:02:40.229554 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=8a63ed1c-9d65-47b3-a05d-98e5e45fbc34]
2026-04-09 00:02:40.594288 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=5d1fd24b-94f5-4f99-adb2-27e0129f5b61]
2026-04-09 00:02:40.594333 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-09 00:02:42.811504 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=364cd792-88d4-4656-84ac-42e5adfc1168]
2026-04-09 00:02:42.825371 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=b6aa33c0-a4a8-450a-bdfd-eaf334278fb9]
2026-04-09 00:02:42.835760 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=15a5a911-370b-4f88-b9e4-bb1166596610]
2026-04-09 00:02:42.856340 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=8a1bf46f-1fd6-404c-9afe-46a88b051d7c]
2026-04-09 00:02:42.915566 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=a4188790-ce2f-4f5e-b379-255b1854dd65]
2026-04-09 00:02:42.938946 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8097ecb5-3f2b-41e0-a4d6-6074ccb6335b]
2026-04-09 00:02:43.471053 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=1f04aa09-948b-49a7-a8fa-d13002067514]
2026-04-09 00:02:43.481103 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-09 00:02:43.482286 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-09 00:02:43.484392 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-09 00:02:43.716621 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=b4b044d0-eb22-4f2d-bade-a1083d637276]
2026-04-09 00:02:43.732163 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-09 00:02:43.732508 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-09 00:02:43.732710 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-09 00:02:43.734479 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-09 00:02:43.734592 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-09 00:02:43.734858 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-09 00:02:43.736892 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-09 00:02:43.738866 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-09 00:02:43.909818 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=e8ed6b52-fd57-45eb-a9b3-4f4dec220a1d]
2026-04-09 00:02:43.921616 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-09 00:02:44.235461 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=b41aae66-2ffc-4e93-a66c-2aea87479a33]
2026-04-09 00:02:44.245220 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=073cb531-3350-4a01-bd07-62db277fbcc1]
2026-04-09 00:02:44.247495 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-09 00:02:44.252016 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-09 00:02:44.433741 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=f22127ec-dc47-479a-b1df-94e24f8933f6]
2026-04-09 00:02:44.439911 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-09 00:02:44.627000 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=1c11d953-e26f-4274-a396-59869e9b5f2e]
2026-04-09 00:02:44.635997 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-09 00:02:44.740270 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=dcfea8b6-40b3-4b4d-8637-e26cc6dceb82]
2026-04-09 00:02:44.744379 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-09 00:02:44.769003 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=f314979e-5950-4945-ac2e-15bfe53a052d]
2026-04-09 00:02:44.775384 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-09 00:02:44.852952 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=08c8974c-720b-424d-ab48-b942486f7fa0]
2026-04-09 00:02:44.859470 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-09 00:02:45.099960 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=bb57510e-fd67-4453-8c8f-be36ecd21c6b]
2026-04-09 00:02:45.425386 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=6bb8db8e-b4a7-4dbb-b37a-b7cd567c4458]
2026-04-09 00:02:45.506784 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=a5d66f83-4790-4545-9f34-d68352d06404]
2026-04-09 00:02:45.513287 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=e930c737-c01c-4952-b766-31e3c71b8261]
2026-04-09 00:02:45.565260 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=5d04def5-83ec-4534-8bf2-536fde58e93c]
2026-04-09 00:02:45.984520 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=d881bb6e-6de5-4299-8b6f-3d924426a679]
2026-04-09 00:02:46.225993 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=adb56055-6a5a-4e56-abc4-45635d343509]
2026-04-09 00:02:46.435803 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=158a26bb-f0f2-420a-99b6-5284fbaab643]
2026-04-09 00:02:46.695724 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 3s [id=f660fee7-1790-4064-b35e-2ce263c95560]
2026-04-09 00:02:47.107923 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=8e8f2744-e0dd-41d9-9946-669e185b0035]
2026-04-09 00:02:47.129606 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-09 00:02:47.141805 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-09 00:02:47.144037 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-09 00:02:47.144242 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-09 00:02:47.156878 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-09 00:02:47.167887 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-09 00:02:47.168938 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-09 00:02:48.869938 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=8378cc15-6bf5-48d8-b913-7f2d1db32eef]
2026-04-09 00:02:48.877404 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-09 00:02:48.883129 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-09 00:02:48.883755 | orchestrator | local_file.inventory: Creating...
2026-04-09 00:02:48.886480 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=c1201441522c5fb3ed00252f5b643214316ee4e9]
2026-04-09 00:02:48.887264 | orchestrator | local_file.inventory: Creation complete after 0s [id=1393370d6faf6eebeb535b2454e18b81f47eecd2]
2026-04-09 00:02:50.414418 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=8378cc15-6bf5-48d8-b913-7f2d1db32eef]
2026-04-09 00:02:57.147681 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-09 00:02:57.147798 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-09 00:02:57.147827 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-09 00:02:57.158110 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-09 00:02:57.168315 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-09 00:02:57.169432 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-09 00:03:07.148252 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-09 00:03:07.148351 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-09 00:03:07.148360 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-09 00:03:07.158729 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-09 00:03:07.168972 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-09 00:03:07.170085 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-09 00:03:17.157883 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-09 00:03:17.157992 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-09 00:03:17.158011 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-09 00:03:17.159261 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-09 00:03:17.169975 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-09 00:03:17.171213 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-09 00:03:27.167403 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-04-09 00:03:27.167483 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-09 00:03:27.167489 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-09 00:03:27.167526 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-04-09 00:03:27.170807 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-09 00:03:27.171854 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-04-09 00:03:28.008161 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=55dc44f7-edb3-4412-912a-06989f7e9e00]
2026-04-09 00:03:37.176761 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-04-09 00:03:37.176868 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-04-09 00:03:37.176878 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-04-09 00:03:37.176886 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-09 00:03:37.176893 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-04-09 00:03:38.149350 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=62ae5481-d62d-4863-bb2a-1913f7c63b36]
2026-04-09 00:03:38.213547 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=775d9f67-5395-45da-8397-b260d96d220e]
2026-04-09 00:03:38.217777 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 51s [id=390941bb-c2e9-4a7b-b369-12beb2bf251a]
2026-04-09 00:03:38.261286 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=4b060aab-fd2d-45c6-9865-ac420f00969f]
2026-04-09 00:03:38.397792 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=18cff649-b731-45b2-9bab-75f217eb48ac]
2026-04-09 00:03:38.427278 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-09 00:03:38.439061 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4049547039824041219]
2026-04-09 00:03:38.439291 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-09 00:03:38.441509 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-09 00:03:38.441599 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-09 00:03:38.452923 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-09 00:03:38.453696 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-09 00:03:38.454465 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-09 00:03:38.468565 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-09 00:03:38.481440 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-09 00:03:38.504505 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-09 00:03:38.507989 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-09 00:03:41.852441 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=55dc44f7-edb3-4412-912a-06989f7e9e00/2bf0158d-8270-4c58-8b9a-c2cfdc8e7669]
2026-04-09 00:03:41.857748 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=775d9f67-5395-45da-8397-b260d96d220e/c3f900a3-fa00-488b-a223-0b2f981ffe7d]
2026-04-09 00:03:42.242556 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=62ae5481-d62d-4863-bb2a-1913f7c63b36/aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965]
2026-04-09 00:03:43.281670 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=775d9f67-5395-45da-8397-b260d96d220e/03bd35e9-2f61-41d7-a9bf-42d58136cbb2]
2026-04-09 00:03:43.326652 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=775d9f67-5395-45da-8397-b260d96d220e/47390fa5-1f85-4c3c-be39-aeec9b514289]
2026-04-09 00:03:43.374396 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=55dc44f7-edb3-4412-912a-06989f7e9e00/5d65a6d0-57b2-4d69-9f6c-8b44337eed1f]
2026-04-09 00:03:43.412970 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=55dc44f7-edb3-4412-912a-06989f7e9e00/81032de5-e928-481e-b1b2-e1c42e1209c2]
2026-04-09 00:03:48.327554 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=62ae5481-d62d-4863-bb2a-1913f7c63b36/32d367e8-aaa8-48f2-9cf3-723daee201fb]
2026-04-09 00:03:48.379286 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=62ae5481-d62d-4863-bb2a-1913f7c63b36/86513dfb-f28c-4b30-a867-1cbb67da9299]
2026-04-09 00:03:48.505150 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-09 00:03:58.514484 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-09 00:03:58.991008 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=f9da7520-cc00-4fc0-a88d-61e5162d8a89]
2026-04-09 00:03:59.945975 | orchestrator |
2026-04-09 00:03:59.946099 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-09 00:03:59.946113 | orchestrator |
2026-04-09 00:03:59.946121 | orchestrator | Outputs:
2026-04-09 00:03:59.946129 | orchestrator |
2026-04-09 00:03:59.946146 | orchestrator | manager_address =
2026-04-09 00:03:59.946153 | orchestrator | private_key =
2026-04-09 00:04:00.153618 | orchestrator | ok: Runtime: 0:01:30.709869
2026-04-09 00:04:00.183199 |
2026-04-09 00:04:00.183339 | TASK [Create infrastructure (stable)]
2026-04-09 00:04:00.720702 | orchestrator | skipping: Conditional result was False
2026-04-09 00:04:00.731995 |
2026-04-09 00:04:00.732129 | TASK [Fetch manager address]
2026-04-09 00:04:01.278079 | orchestrator | ok
2026-04-09 00:04:01.290552 |
2026-04-09 00:04:01.290708 | TASK [Set manager_host address]
2026-04-09 00:04:01.391477 | orchestrator | ok
2026-04-09 00:04:01.405302 |
2026-04-09 00:04:01.405442 | LOOP [Update ansible collections]
2026-04-09 00:04:02.513914 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:04:02.514347 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-09 00:04:02.514406 | orchestrator | Starting galaxy collection install process
2026-04-09 00:04:02.514440 | orchestrator | Process install dependency map
2026-04-09 00:04:02.514471 | orchestrator | Starting collection install process
2026-04-09 00:04:02.514499 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-04-09 00:04:02.514534 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-04-09 00:04:02.514574 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-09 00:04:02.514639 | orchestrator | ok: Item: commons Runtime: 0:00:00.684657
2026-04-09 00:04:03.623194 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-09 00:04:03.623322 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:04:03.623354 | orchestrator | Starting galaxy collection install process
2026-04-09 00:04:03.623377 | orchestrator | Process install dependency map
2026-04-09 00:04:03.623399 | orchestrator | Starting collection install process
2026-04-09 00:04:03.623420 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-04-09 00:04:03.623441 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-04-09 00:04:03.623460 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-09 00:04:03.623494 | orchestrator | ok: Item: services Runtime: 0:00:00.767955
2026-04-09 00:04:03.640412 |
2026-04-09 00:04:03.640573 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-09 00:04:14.296699 | orchestrator | ok
2026-04-09 00:04:14.307524 |
2026-04-09 00:04:14.307636 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-09 00:05:14.346952 | orchestrator | ok
2026-04-09 00:05:14.357311 |
2026-04-09 00:05:14.357455 | TASK [Fetch manager ssh hostkey]
2026-04-09 00:05:15.952752 | orchestrator | Output suppressed because no_log was given
2026-04-09 00:05:15.973473 |
2026-04-09 00:05:15.973635 | TASK [Get ssh keypair from terraform environment]
2026-04-09 00:05:16.559543 | orchestrator | ok: Runtime: 0:00:00.006147
2026-04-09 00:05:16.567366 |
2026-04-09 00:05:16.567489 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-09 00:05:16.605319 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-09 00:05:16.612615 |
2026-04-09 00:05:16.612754 | TASK [Run manager part 0]
2026-04-09 00:05:17.795138 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:05:17.848154 | orchestrator |
2026-04-09 00:05:17.848237 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-09 00:05:17.848249 | orchestrator |
2026-04-09 00:05:17.848270 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-09 00:05:19.567690 | orchestrator | ok: [testbed-manager]
2026-04-09 00:05:19.567753 | orchestrator |
2026-04-09 00:05:19.567776 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-09 00:05:19.567786 | orchestrator |
2026-04-09 00:05:19.567794 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:05:21.469445 | orchestrator | ok: [testbed-manager]
2026-04-09 00:05:21.469529 | orchestrator |
2026-04-09 00:05:21.469546 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-09 00:05:22.136571 | orchestrator | ok: [testbed-manager]
2026-04-09 00:05:22.136610 | orchestrator |
2026-04-09 00:05:22.136619 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-09 00:05:22.179824 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:05:22.179863 | orchestrator |
2026-04-09 00:05:22.179871 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-09 00:05:22.205378 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:05:22.205429 | orchestrator |
2026-04-09 00:05:22.205440 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-09 00:05:22.231573 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:05:22.231619 | orchestrator |
2026-04-09 00:05:22.231628 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-09 00:05:22.885445 | orchestrator | changed: [testbed-manager]
2026-04-09 00:05:22.885514 | orchestrator |
2026-04-09 00:05:22.885525 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-09 00:08:06.121180 | orchestrator | changed: [testbed-manager]
2026-04-09 00:08:06.121259 | orchestrator |
2026-04-09 00:08:06.121276 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-09 00:09:22.051891 | orchestrator | changed: [testbed-manager]
2026-04-09 00:09:22.052019 | orchestrator |
2026-04-09 00:09:22.052041 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-09 00:09:42.114775 | orchestrator | changed: [testbed-manager]
2026-04-09 00:09:42.115034 | orchestrator |
2026-04-09 00:09:42.115058 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-09 00:09:50.577556 | orchestrator | changed: [testbed-manager]
2026-04-09 00:09:50.577605 | orchestrator |
2026-04-09 00:09:50.577615 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-09 00:09:50.630051 | orchestrator | ok: [testbed-manager] 2026-04-09 00:09:50.630093 | orchestrator | 2026-04-09 00:09:50.630104 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-09 00:09:51.450293 | orchestrator | ok: [testbed-manager] 2026-04-09 00:09:51.450503 | orchestrator | 2026-04-09 00:09:51.450523 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-09 00:09:52.174740 | orchestrator | changed: [testbed-manager] 2026-04-09 00:09:52.174956 | orchestrator | 2026-04-09 00:09:52.174990 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-09 00:09:58.396004 | orchestrator | changed: [testbed-manager] 2026-04-09 00:09:58.396092 | orchestrator | 2026-04-09 00:09:58.396110 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-09 00:10:04.109506 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:04.109566 | orchestrator | 2026-04-09 00:10:04.109572 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-09 00:10:06.683495 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:06.683605 | orchestrator | 2026-04-09 00:10:06.683622 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-09 00:10:08.359610 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:08.359672 | orchestrator | 2026-04-09 00:10:08.359685 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-09 00:10:09.385747 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-09 00:10:09.385910 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-09 00:10:09.385929 | orchestrator | 2026-04-09 00:10:09.385945 | orchestrator | TASK [Sync 
sources in /opt/src] ************************************************ 2026-04-09 00:10:09.430343 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-09 00:10:09.430416 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-09 00:10:09.430431 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-09 00:10:09.430446 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-09 00:10:16.897068 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-09 00:10:16.897118 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-09 00:10:16.897127 | orchestrator | 2026-04-09 00:10:16.897134 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-09 00:10:17.455300 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:17.455419 | orchestrator | 2026-04-09 00:10:17.455436 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-09 00:13:40.554353 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-09 00:13:40.554404 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-09 00:13:40.554413 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-09 00:13:40.554420 | orchestrator | 2026-04-09 00:13:40.554428 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-09 00:13:42.749766 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-09 00:13:42.749824 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-09 00:13:42.749831 | orchestrator | 2026-04-09 00:13:42.749839 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-09 
00:13:42.749844 | orchestrator | 2026-04-09 00:13:42.749849 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:13:44.097459 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:44.097497 | orchestrator | 2026-04-09 00:13:44.097503 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-09 00:13:44.155813 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:44.155858 | orchestrator | 2026-04-09 00:13:44.155867 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-09 00:13:44.227111 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:44.227158 | orchestrator | 2026-04-09 00:13:44.227168 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-09 00:13:44.994802 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:44.995010 | orchestrator | 2026-04-09 00:13:44.995032 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-09 00:13:45.703502 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:45.703599 | orchestrator | 2026-04-09 00:13:45.703615 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-09 00:13:47.053596 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-09 00:13:47.053640 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-09 00:13:47.053648 | orchestrator | 2026-04-09 00:13:47.053656 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-09 00:13:48.402958 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:48.403039 | orchestrator | 2026-04-09 00:13:48.403053 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-09 00:13:50.085908 | orchestrator | changed: 
[testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:13:50.086004 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-09 00:13:50.086066 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:13:50.086081 | orchestrator | 2026-04-09 00:13:50.086094 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-09 00:13:50.144001 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:50.144055 | orchestrator | 2026-04-09 00:13:50.144061 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-09 00:13:50.212169 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:50.212221 | orchestrator | 2026-04-09 00:13:50.212227 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-09 00:13:50.753630 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:50.753670 | orchestrator | 2026-04-09 00:13:50.753676 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-09 00:13:50.821739 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:50.821779 | orchestrator | 2026-04-09 00:13:50.821785 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-09 00:13:51.695245 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 00:13:51.695331 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:51.695341 | orchestrator | 2026-04-09 00:13:51.695347 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-09 00:13:51.725699 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:51.725733 | orchestrator | 2026-04-09 00:13:51.725740 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-09 
00:13:51.753670 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:51.753705 | orchestrator | 2026-04-09 00:13:51.753711 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-09 00:13:51.778689 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:51.778731 | orchestrator | 2026-04-09 00:13:51.778737 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-09 00:13:51.851022 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:51.851065 | orchestrator | 2026-04-09 00:13:51.851071 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-09 00:13:52.559350 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:52.559439 | orchestrator | 2026-04-09 00:13:52.559457 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-09 00:13:52.559469 | orchestrator | 2026-04-09 00:13:52.559482 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:13:53.955384 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:53.955477 | orchestrator | 2026-04-09 00:13:53.955492 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-09 00:13:54.897486 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:54.897555 | orchestrator | 2026-04-09 00:13:54.897564 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:13:54.897578 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-09 00:13:54.897585 | orchestrator | 2026-04-09 00:13:55.459357 | orchestrator | ok: Runtime: 0:08:38.065168 2026-04-09 00:13:55.468590 | 2026-04-09 00:13:55.468711 | TASK [Point out that the log in on the manager is now possible] 2026-04-09 00:13:55.498752 | 
orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-09 00:13:55.506070 | 2026-04-09 00:13:55.506220 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-09 00:13:55.538664 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-09 00:13:55.546863 | 2026-04-09 00:13:55.546983 | TASK [Run manager part 1 + 2] 2026-04-09 00:13:56.917940 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-09 00:13:56.972677 | orchestrator | 2026-04-09 00:13:56.972764 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-09 00:13:56.972782 | orchestrator | 2026-04-09 00:13:56.972810 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:14:00.085545 | orchestrator | ok: [testbed-manager] 2026-04-09 00:14:00.085644 | orchestrator | 2026-04-09 00:14:00.085712 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-09 00:14:00.122453 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:14:00.122538 | orchestrator | 2026-04-09 00:14:00.122557 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-09 00:14:00.158376 | orchestrator | ok: [testbed-manager] 2026-04-09 00:14:00.158431 | orchestrator | 2026-04-09 00:14:00.158440 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-09 00:14:00.191115 | orchestrator | ok: [testbed-manager] 2026-04-09 00:14:00.191167 | orchestrator | 2026-04-09 00:14:00.191174 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-09 00:14:00.259411 | orchestrator | ok:
[testbed-manager] 2026-04-09 00:14:00.259496 | orchestrator | 2026-04-09 00:14:00.259515 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-09 00:14:00.324380 | orchestrator | ok: [testbed-manager] 2026-04-09 00:14:00.324461 | orchestrator | 2026-04-09 00:14:00.324478 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-09 00:14:00.383982 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-09 00:14:00.384064 | orchestrator | 2026-04-09 00:14:00.384079 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-09 00:14:01.173806 | orchestrator | ok: [testbed-manager] 2026-04-09 00:14:01.175035 | orchestrator | 2026-04-09 00:14:01.175051 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-09 00:14:01.219801 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:14:01.219863 | orchestrator | 2026-04-09 00:14:01.219878 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-09 00:14:02.692244 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:02.692371 | orchestrator | 2026-04-09 00:14:02.692400 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-09 00:14:03.316310 | orchestrator | ok: [testbed-manager] 2026-04-09 00:14:03.316375 | orchestrator | 2026-04-09 00:14:03.316392 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-09 00:14:04.588121 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:04.588160 | orchestrator | 2026-04-09 00:14:04.588170 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-09 00:14:21.642445 | 
orchestrator | changed: [testbed-manager] 2026-04-09 00:14:21.642546 | orchestrator | 2026-04-09 00:14:21.642574 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-09 00:14:22.300102 | orchestrator | ok: [testbed-manager] 2026-04-09 00:14:22.300189 | orchestrator | 2026-04-09 00:14:22.300205 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-09 00:14:22.346109 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:14:22.346144 | orchestrator | 2026-04-09 00:14:22.346150 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-09 00:14:23.287904 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:23.288854 | orchestrator | 2026-04-09 00:14:23.288919 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-09 00:14:24.206313 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:24.206369 | orchestrator | 2026-04-09 00:14:24.206377 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-09 00:14:24.736619 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:24.736676 | orchestrator | 2026-04-09 00:14:24.736690 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-09 00:14:24.779525 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-09 00:14:24.779653 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-09 00:14:24.779676 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-09 00:14:24.779696 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-09 00:14:26.900577 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:26.900741 | orchestrator | 2026-04-09 00:14:26.900750 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-09 00:14:35.844209 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-09 00:14:35.844408 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-09 00:14:35.844428 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-09 00:14:35.844440 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-09 00:14:35.844461 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-09 00:14:35.844472 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-09 00:14:35.844483 | orchestrator | 2026-04-09 00:14:35.844495 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-09 00:14:36.884183 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:36.884290 | orchestrator | 2026-04-09 00:14:36.884301 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-09 00:14:39.893968 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:39.894102 | orchestrator | 2026-04-09 00:14:39.894124 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-09 00:14:39.937552 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:14:39.937635 | orchestrator | 2026-04-09 00:14:39.937650 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-09 00:16:14.051504 | orchestrator | changed: [testbed-manager] 2026-04-09 00:16:14.051554 | orchestrator | 2026-04-09 00:16:14.051563 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-09 00:16:15.149428 | orchestrator | ok: [testbed-manager] 2026-04-09 00:16:15.150332 | 
orchestrator | 2026-04-09 00:16:15.150350 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:16:15.150356 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-09 00:16:15.150361 | orchestrator | 2026-04-09 00:16:15.679217 | orchestrator | ok: Runtime: 0:02:19.407503 2026-04-09 00:16:15.700110 | 2026-04-09 00:16:15.700312 | TASK [Reboot manager] 2026-04-09 00:16:17.243287 | orchestrator | ok: Runtime: 0:00:00.921988 2026-04-09 00:16:17.258547 | 2026-04-09 00:16:17.258734 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-09 00:16:33.648201 | orchestrator | ok 2026-04-09 00:16:33.658557 | 2026-04-09 00:16:33.658697 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-09 00:17:33.704089 | orchestrator | ok 2026-04-09 00:17:33.714306 | 2026-04-09 00:17:33.714441 | TASK [Deploy manager + bootstrap nodes] 2026-04-09 00:17:35.871503 | orchestrator | 2026-04-09 00:17:35.871699 | orchestrator | # DEPLOY MANAGER 2026-04-09 00:17:35.871723 | orchestrator | 2026-04-09 00:17:35.871738 | orchestrator | + set -e 2026-04-09 00:17:35.871752 | orchestrator | + echo 2026-04-09 00:17:35.871767 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-09 00:17:35.871785 | orchestrator | + echo 2026-04-09 00:17:35.871835 | orchestrator | + cat /opt/manager-vars.sh 2026-04-09 00:17:35.874477 | orchestrator | export NUMBER_OF_NODES=6 2026-04-09 00:17:35.874506 | orchestrator | 2026-04-09 00:17:35.874519 | orchestrator | export CEPH_VERSION=reef 2026-04-09 00:17:35.874532 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-09 00:17:35.874545 | orchestrator | export MANAGER_VERSION=latest 2026-04-09 00:17:35.874568 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-04-09 00:17:35.874579 | orchestrator | 2026-04-09 00:17:35.874597 | orchestrator | export ARA=false 2026-04-09 00:17:35.874609 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-09 00:17:35.874627 | orchestrator | export TEMPEST=true 2026-04-09 00:17:35.874639 | orchestrator | export IS_ZUUL=true 2026-04-09 00:17:35.874650 | orchestrator | 2026-04-09 00:17:35.874668 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 00:17:35.874680 | orchestrator | export EXTERNAL_API=false 2026-04-09 00:17:35.874691 | orchestrator | 2026-04-09 00:17:35.874702 | orchestrator | export IMAGE_USER=ubuntu 2026-04-09 00:17:35.874717 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:35.874728 | orchestrator | 2026-04-09 00:17:35.874739 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-09 00:17:35.874798 | orchestrator | 2026-04-09 00:17:35.874812 | orchestrator | + echo 2026-04-09 00:17:35.874825 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 00:17:35.875909 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 00:17:35.875938 | orchestrator | ++ INTERACTIVE=false 2026-04-09 00:17:35.875949 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 00:17:35.875962 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 00:17:35.876188 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 00:17:35.876204 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 00:17:35.876215 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 00:17:35.876226 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 00:17:35.876237 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 00:17:35.876248 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 00:17:35.876265 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 00:17:35.876276 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 00:17:35.876287 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 00:17:35.876298 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-09 00:17:35.876321 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-09 00:17:35.876333 | orchestrator | ++ export 
ARA=false 2026-04-09 00:17:35.876345 | orchestrator | ++ ARA=false 2026-04-09 00:17:35.876360 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 00:17:35.876371 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 00:17:35.876382 | orchestrator | ++ export TEMPEST=true 2026-04-09 00:17:35.876393 | orchestrator | ++ TEMPEST=true 2026-04-09 00:17:35.876404 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 00:17:35.876415 | orchestrator | ++ IS_ZUUL=true 2026-04-09 00:17:35.876432 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 00:17:35.876443 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 00:17:35.876457 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 00:17:35.876469 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 00:17:35.876480 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 00:17:35.876490 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 00:17:35.876502 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:35.876512 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:35.876523 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 00:17:35.876534 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 00:17:35.876546 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-09 00:17:35.924317 | orchestrator | + docker version 2026-04-09 00:17:36.032753 | orchestrator | Client: Docker Engine - Community 2026-04-09 00:17:36.032828 | orchestrator | Version: 27.5.1 2026-04-09 00:17:36.032836 | orchestrator | API version: 1.47 2026-04-09 00:17:36.032842 | orchestrator | Go version: go1.22.11 2026-04-09 00:17:36.032846 | orchestrator | Git commit: 9f9e405 2026-04-09 00:17:36.032850 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-09 00:17:36.032855 | orchestrator | OS/Arch: linux/amd64 2026-04-09 00:17:36.032859 | orchestrator | Context: default 2026-04-09 00:17:36.032863 | orchestrator | 2026-04-09 00:17:36.032868 | 
orchestrator | Server: Docker Engine - Community 2026-04-09 00:17:36.032872 | orchestrator | Engine: 2026-04-09 00:17:36.032876 | orchestrator | Version: 27.5.1 2026-04-09 00:17:36.032880 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-09 00:17:36.032903 | orchestrator | Go version: go1.22.11 2026-04-09 00:17:36.032907 | orchestrator | Git commit: 4c9b3b0 2026-04-09 00:17:36.032911 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-09 00:17:36.032915 | orchestrator | OS/Arch: linux/amd64 2026-04-09 00:17:36.032919 | orchestrator | Experimental: false 2026-04-09 00:17:36.032923 | orchestrator | containerd: 2026-04-09 00:17:36.032927 | orchestrator | Version: v2.2.2 2026-04-09 00:17:36.032931 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-09 00:17:36.032935 | orchestrator | runc: 2026-04-09 00:17:36.032939 | orchestrator | Version: 1.3.4 2026-04-09 00:17:36.032943 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-09 00:17:36.032947 | orchestrator | docker-init: 2026-04-09 00:17:36.032951 | orchestrator | Version: 0.19.0 2026-04-09 00:17:36.032955 | orchestrator | GitCommit: de40ad0 2026-04-09 00:17:36.035181 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-09 00:17:36.043227 | orchestrator | + set -e 2026-04-09 00:17:36.043265 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 00:17:36.043270 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 00:17:36.043276 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 00:17:36.043280 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 00:17:36.043284 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 00:17:36.043288 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 00:17:36.043293 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 00:17:36.043297 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 00:17:36.043301 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 00:17:36.043305 | 
orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-09 00:17:36.043308 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-09 00:17:36.043312 | orchestrator | ++ export ARA=false 2026-04-09 00:17:36.043316 | orchestrator | ++ ARA=false 2026-04-09 00:17:36.043320 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 00:17:36.043324 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 00:17:36.043328 | orchestrator | ++ export TEMPEST=true 2026-04-09 00:17:36.043332 | orchestrator | ++ TEMPEST=true 2026-04-09 00:17:36.043336 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 00:17:36.043339 | orchestrator | ++ IS_ZUUL=true 2026-04-09 00:17:36.043343 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 00:17:36.043347 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 00:17:36.043351 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 00:17:36.043354 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 00:17:36.043358 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 00:17:36.043362 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 00:17:36.043366 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:36.043370 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:36.043373 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 00:17:36.043377 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 00:17:36.043381 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 00:17:36.043385 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 00:17:36.043389 | orchestrator | ++ INTERACTIVE=false 2026-04-09 00:17:36.043392 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 00:17:36.043400 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 00:17:36.043404 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 00:17:36.043412 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:17:36.043416 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-09 00:17:36.047736 | orchestrator | + set -e 2026-04-09 00:17:36.047748 | orchestrator | + VERSION=reef 2026-04-09 00:17:36.048719 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-09 00:17:36.054245 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-09 00:17:36.054270 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-09 00:17:36.059527 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-04-09 00:17:36.065301 | orchestrator | + set -e 2026-04-09 00:17:36.065385 | orchestrator | + VERSION=2025.1 2026-04-09 00:17:36.065690 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-09 00:17:36.069159 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-09 00:17:36.069205 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-04-09 00:17:36.073934 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-09 00:17:36.074921 | orchestrator | ++ semver latest 7.0.0 2026-04-09 00:17:36.130844 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:17:36.130940 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:17:36.130955 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-09 00:17:36.131778 | orchestrator | ++ semver latest 10.0.0-0 2026-04-09 00:17:36.184719 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:17:36.185099 | orchestrator | ++ semver 2025.1 2025.1 2026-04-09 00:17:36.250684 | orchestrator | + [[ 0 -ge 0 ]] 2026-04-09 00:17:36.250789 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-04-09 00:17:36.256615 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' 
/opt/configuration/environments/kolla/configuration.yml 2026-04-09 00:17:36.261071 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-09 00:17:36.342074 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 00:17:36.343102 | orchestrator | + source /opt/venv/bin/activate 2026-04-09 00:17:36.344076 | orchestrator | ++ deactivate nondestructive 2026-04-09 00:17:36.344249 | orchestrator | ++ '[' -n '' ']' 2026-04-09 00:17:36.344263 | orchestrator | ++ '[' -n '' ']' 2026-04-09 00:17:36.344275 | orchestrator | ++ hash -r 2026-04-09 00:17:36.344287 | orchestrator | ++ '[' -n '' ']' 2026-04-09 00:17:36.344298 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-09 00:17:36.344309 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-09 00:17:36.344327 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-09 00:17:36.344345 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-09 00:17:36.344356 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-09 00:17:36.344367 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-09 00:17:36.344378 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-09 00:17:36.344405 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 00:17:36.344446 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 00:17:36.344482 | orchestrator | ++ export PATH 2026-04-09 00:17:36.344808 | orchestrator | ++ '[' -n '' ']' 2026-04-09 00:17:36.344833 | orchestrator | ++ '[' -z '' ']' 2026-04-09 00:17:36.344851 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-09 00:17:36.344869 | orchestrator | ++ PS1='(venv) ' 2026-04-09 00:17:36.344888 | orchestrator | ++ export PS1 2026-04-09 00:17:36.344905 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-09 00:17:36.344932 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-09 
00:17:36.344953 | orchestrator | ++ hash -r 2026-04-09 00:17:36.344978 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-09 00:17:37.335590 | orchestrator | 2026-04-09 00:17:37.779758 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-09 00:17:37.779844 | orchestrator | 2026-04-09 00:17:37.779860 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-09 00:17:37.822076 | orchestrator | ok: [testbed-manager] 2026-04-09 00:17:37.822183 | orchestrator | 2026-04-09 00:17:37.822196 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-09 00:17:39.894095 | orchestrator | changed: [testbed-manager] 2026-04-09 00:17:39.894233 | orchestrator | 2026-04-09 00:17:39.894254 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-09 00:17:39.894267 | orchestrator | 2026-04-09 00:17:39.894279 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:17:42.137650 | orchestrator | ok: [testbed-manager] 2026-04-09 00:17:42.137753 | orchestrator | 2026-04-09 00:17:42.137769 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-09 00:17:42.179991 | orchestrator | ok: [testbed-manager] 2026-04-09 00:17:42.180088 | orchestrator | 2026-04-09 00:17:42.180104 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-09 00:17:42.561772 | orchestrator | changed: [testbed-manager] 2026-04-09 00:17:42.561930 | orchestrator | 2026-04-09 00:17:42.561978 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-09 00:17:42.587486 | orchestrator | skipping: 
[testbed-manager] 2026-04-09 00:17:42.587581 | orchestrator | 2026-04-09 00:17:42.587596 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-09 00:17:42.876592 | orchestrator | changed: [testbed-manager] 2026-04-09 00:17:42.876719 | orchestrator | 2026-04-09 00:17:42.876778 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-09 00:17:43.147892 | orchestrator | ok: [testbed-manager] 2026-04-09 00:17:43.148018 | orchestrator | 2026-04-09 00:17:43.148047 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-09 00:17:43.244338 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:17:43.244427 | orchestrator | 2026-04-09 00:17:43.244441 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-09 00:17:43.244453 | orchestrator | 2026-04-09 00:17:43.244465 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:17:44.787263 | orchestrator | ok: [testbed-manager] 2026-04-09 00:17:44.787365 | orchestrator | 2026-04-09 00:17:44.787383 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-09 00:17:44.880718 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-09 00:17:44.880848 | orchestrator | 2026-04-09 00:17:44.880866 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-09 00:17:44.926871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-09 00:17:44.926960 | orchestrator | 2026-04-09 00:17:44.926974 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-09 00:17:45.904926 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik) 2026-04-09 00:17:45.905029 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-09 00:17:45.905044 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-09 00:17:45.905057 | orchestrator | 2026-04-09 00:17:45.905072 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-09 00:17:47.489003 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-09 00:17:47.489096 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-09 00:17:47.489187 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-09 00:17:47.489212 | orchestrator | 2026-04-09 00:17:47.489227 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-09 00:17:48.052353 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 00:17:48.052456 | orchestrator | changed: [testbed-manager] 2026-04-09 00:17:48.052474 | orchestrator | 2026-04-09 00:17:48.052486 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-09 00:17:48.693386 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 00:17:48.693480 | orchestrator | changed: [testbed-manager] 2026-04-09 00:17:48.693496 | orchestrator | 2026-04-09 00:17:48.693507 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-09 00:17:48.751322 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:17:48.751411 | orchestrator | 2026-04-09 00:17:48.751426 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-09 00:17:49.075804 | orchestrator | ok: [testbed-manager] 2026-04-09 00:17:49.075889 | orchestrator | 2026-04-09 00:17:49.075902 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-09 
00:17:49.136210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-09 00:17:49.136293 | orchestrator | 2026-04-09 00:17:49.136325 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-04-09 00:17:50.207539 | orchestrator | changed: [testbed-manager] 2026-04-09 00:17:50.207647 | orchestrator | 2026-04-09 00:17:50.207664 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-09 00:17:51.046410 | orchestrator | changed: [testbed-manager] 2026-04-09 00:17:51.046493 | orchestrator | 2026-04-09 00:17:51.046505 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-09 00:18:00.501249 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:00.501359 | orchestrator | 2026-04-09 00:18:00.501378 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-09 00:18:00.540884 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:18:00.540971 | orchestrator | 2026-04-09 00:18:00.540986 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-09 00:18:00.541026 | orchestrator | 2026-04-09 00:18:00.541038 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:18:02.124982 | orchestrator | ok: [testbed-manager] 2026-04-09 00:18:02.125103 | orchestrator | 2026-04-09 00:18:02.125151 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-09 00:18:02.227389 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-09 00:18:02.227515 | orchestrator | 2026-04-09 00:18:02.227543 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-09 00:18:02.278927 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 00:18:02.279023 | orchestrator | 2026-04-09 00:18:02.279036 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-09 00:18:04.447445 | orchestrator | ok: [testbed-manager] 2026-04-09 00:18:04.447543 | orchestrator | 2026-04-09 00:18:04.447560 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-09 00:18:04.497323 | orchestrator | ok: [testbed-manager] 2026-04-09 00:18:04.497422 | orchestrator | 2026-04-09 00:18:04.497438 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-09 00:18:04.643473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-09 00:18:04.643582 | orchestrator | 2026-04-09 00:18:04.643608 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-09 00:18:07.191532 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-09 00:18:07.191651 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-09 00:18:07.191666 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-09 00:18:07.191677 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-09 00:18:07.191701 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-09 00:18:07.192492 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-09 00:18:07.192512 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-09 00:18:07.192524 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-09 00:18:07.192534 | orchestrator | 2026-04-09 00:18:07.192546 | 
orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-09 00:18:07.773433 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:07.773536 | orchestrator | 2026-04-09 00:18:07.773553 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-09 00:18:08.334275 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:08.334380 | orchestrator | 2026-04-09 00:18:08.334397 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-09 00:18:08.408975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-09 00:18:08.409079 | orchestrator | 2026-04-09 00:18:08.409100 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-09 00:18:09.507968 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-09 00:18:09.508076 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-09 00:18:09.508093 | orchestrator | 2026-04-09 00:18:09.508141 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-09 00:18:10.055709 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:10.055805 | orchestrator | 2026-04-09 00:18:10.055821 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-09 00:18:10.099191 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:18:10.099304 | orchestrator | 2026-04-09 00:18:10.099320 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-09 00:18:10.172798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-09 00:18:10.172894 | orchestrator | 2026-04-09 00:18:10.172911 | 
orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-09 00:18:10.705499 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:10.705662 | orchestrator | 2026-04-09 00:18:10.705692 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-09 00:18:10.760956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-09 00:18:10.761063 | orchestrator | 2026-04-09 00:18:10.761080 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-09 00:18:11.961400 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 00:18:11.961508 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 00:18:11.961524 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:11.961538 | orchestrator | 2026-04-09 00:18:11.961550 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-09 00:18:12.495471 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:12.495553 | orchestrator | 2026-04-09 00:18:12.495566 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-09 00:18:12.546437 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:18:12.546538 | orchestrator | 2026-04-09 00:18:12.546558 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-09 00:18:12.631841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-09 00:18:12.631919 | orchestrator | 2026-04-09 00:18:12.631930 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-09 00:18:13.060216 | orchestrator | changed: [testbed-manager] 
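The two `Set fs.inotify.max_user_*` tasks above raise inotify limits for the manager's file-watching services. A minimal sketch of what such a sysctl task boils down to, assuming the role uses `ansible.posix.sysctl` semantics (the concrete values and drop-in file name below are assumptions, not taken from the log, and a temp directory stands in for `/etc/sysctl.d` so the sketch runs unprivileged):

```shell
#!/usr/bin/env bash
# Hedged sketch of a sysctl task: persist a key in a sysctl.d drop-in
# and apply it immediately. Values and file name are assumptions.
set -euo pipefail

# Demo target dir; a real deployment writes to /etc/sysctl.d as root.
SYSCTL_DIR="${SYSCTL_DIR:-$(mktemp -d)}"

set_sysctl() {
    local key="$1" value="$2"
    local file="${SYSCTL_DIR}/99-osism-manager.conf"
    mkdir -p "${SYSCTL_DIR}"
    # Replace an existing entry for the key, or append a new one,
    # so repeated runs stay idempotent (like the Ansible module).
    if grep -q "^${key}[ =]" "${file}" 2>/dev/null; then
        sed -i "s|^${key}[ =].*|${key} = ${value}|" "${file}"
    else
        echo "${key} = ${value}" >> "${file}"
    fi
    # Apply at runtime when permitted; ignore failure when not root.
    sysctl -w "${key}=${value}" 2>/dev/null || true
}

# Assumed limits; the log only shows that both keys were changed.
set_sysctl fs.inotify.max_user_watches 524288
set_sysctl fs.inotify.max_user_instances 512
```

Re-running `set_sysctl` with a new value rewrites the existing line rather than appending a duplicate, which is why the Ansible task reports `changed` only when the value actually differs.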
2026-04-09 00:18:13.060306 | orchestrator | 2026-04-09 00:18:13.060318 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-09 00:18:13.388262 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:13.388354 | orchestrator | 2026-04-09 00:18:13.388370 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-09 00:18:14.398210 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-09 00:18:14.398310 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-09 00:18:14.398329 | orchestrator | 2026-04-09 00:18:14.398343 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-09 00:18:14.913924 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:14.914131 | orchestrator | 2026-04-09 00:18:14.914163 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-09 00:18:15.196274 | orchestrator | ok: [testbed-manager] 2026-04-09 00:18:15.196370 | orchestrator | 2026-04-09 00:18:15.196384 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-09 00:18:15.517332 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:15.517430 | orchestrator | 2026-04-09 00:18:15.517446 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-09 00:18:15.553146 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:18:15.553218 | orchestrator | 2026-04-09 00:18:15.553232 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-09 00:18:15.613840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-09 00:18:15.613933 | orchestrator | 2026-04-09 00:18:15.613949 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-04-09 00:18:15.655438 | orchestrator | ok: [testbed-manager] 2026-04-09 00:18:15.655513 | orchestrator | 2026-04-09 00:18:15.655526 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-09 00:18:17.476889 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-09 00:18:17.476996 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-09 00:18:17.477015 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-09 00:18:17.477028 | orchestrator | 2026-04-09 00:18:17.477040 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-09 00:18:18.173966 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:18.174183 | orchestrator | 2026-04-09 00:18:18.174212 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-09 00:18:18.854316 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:18.854438 | orchestrator | 2026-04-09 00:18:18.854453 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-09 00:18:19.553596 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:19.553697 | orchestrator | 2026-04-09 00:18:19.553714 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-09 00:18:19.620253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-09 00:18:19.620345 | orchestrator | 2026-04-09 00:18:19.620359 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-09 00:18:19.661506 | orchestrator | ok: [testbed-manager] 2026-04-09 00:18:19.661601 | orchestrator | 2026-04-09 00:18:19.661616 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-04-09 00:18:20.336549 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-09 00:18:20.336651 | orchestrator | 2026-04-09 00:18:20.336666 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-09 00:18:20.416820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-09 00:18:20.416928 | orchestrator | 2026-04-09 00:18:20.416948 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-09 00:18:21.099539 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:21.099662 | orchestrator | 2026-04-09 00:18:21.099680 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-09 00:18:21.708395 | orchestrator | ok: [testbed-manager] 2026-04-09 00:18:21.708485 | orchestrator | 2026-04-09 00:18:21.708499 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-09 00:18:21.768933 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:18:21.769013 | orchestrator | 2026-04-09 00:18:21.769027 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-09 00:18:21.830171 | orchestrator | ok: [testbed-manager] 2026-04-09 00:18:21.830254 | orchestrator | 2026-04-09 00:18:21.830269 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-09 00:18:22.627823 | orchestrator | changed: [testbed-manager] 2026-04-09 00:18:22.627930 | orchestrator | 2026-04-09 00:18:22.627946 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-09 00:19:27.860720 | orchestrator | changed: [testbed-manager] 2026-04-09 00:19:27.860832 | orchestrator | 2026-04-09 
00:19:27.860847 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-09 00:19:28.707518 | orchestrator | ok: [testbed-manager] 2026-04-09 00:19:28.707621 | orchestrator | 2026-04-09 00:19:28.707638 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-09 00:19:28.753334 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:19:28.753413 | orchestrator | 2026-04-09 00:19:28.753443 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-09 00:19:31.401426 | orchestrator | changed: [testbed-manager] 2026-04-09 00:19:31.401533 | orchestrator | 2026-04-09 00:19:31.401551 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-09 00:19:31.466652 | orchestrator | ok: [testbed-manager] 2026-04-09 00:19:31.466746 | orchestrator | 2026-04-09 00:19:31.466762 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-09 00:19:31.466775 | orchestrator | 2026-04-09 00:19:31.466786 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-09 00:19:31.504986 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:19:31.505076 | orchestrator | 2026-04-09 00:19:31.505146 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-09 00:20:31.548494 | orchestrator | Pausing for 60 seconds 2026-04-09 00:20:31.548607 | orchestrator | changed: [testbed-manager] 2026-04-09 00:20:31.548623 | orchestrator | 2026-04-09 00:20:31.548637 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-09 00:20:34.976752 | orchestrator | changed: [testbed-manager] 2026-04-09 00:20:34.976845 | orchestrator | 2026-04-09 00:20:34.976861 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-04-09 00:21:16.411778 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-09 00:21:16.411905 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-04-09 00:21:16.411932 | orchestrator | changed: [testbed-manager] 2026-04-09 00:21:16.411953 | orchestrator | 2026-04-09 00:21:16.411973 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-09 00:21:21.996533 | orchestrator | changed: [testbed-manager] 2026-04-09 00:21:21.996640 | orchestrator | 2026-04-09 00:21:21.996658 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-09 00:21:22.065254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-09 00:21:22.065348 | orchestrator | 2026-04-09 00:21:22.065364 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-09 00:21:22.065376 | orchestrator | 2026-04-09 00:21:22.065387 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-09 00:21:22.114896 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:22.114976 | orchestrator | 2026-04-09 00:21:22.114989 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-09 00:21:22.178505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-09 00:21:22.178608 | orchestrator | 2026-04-09 00:21:22.178625 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-09 00:21:22.928952 | orchestrator | changed: [testbed-manager] 2026-04-09 00:21:22.929110 | 
orchestrator | 2026-04-09 00:21:22.929132 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-09 00:21:25.925418 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:25.925521 | orchestrator | 2026-04-09 00:21:25.925540 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-09 00:21:25.995352 | orchestrator | ok: [testbed-manager] => { 2026-04-09 00:21:25.995445 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-09 00:21:25.995461 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-09 00:21:25.995473 | orchestrator | "Checking running containers against expected versions...", 2026-04-09 00:21:25.995486 | orchestrator | "", 2026-04-09 00:21:25.995498 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-09 00:21:25.995509 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-09 00:21:25.995521 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.995532 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-09 00:21:25.995543 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.995555 | orchestrator | "", 2026-04-09 00:21:25.995566 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-09 00:21:25.995577 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-09 00:21:25.995588 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.995599 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-09 00:21:25.995610 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.995621 | orchestrator | "", 2026-04-09 00:21:25.995632 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-09 00:21:25.995643 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-09 
00:21:25.995654 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.995665 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-09 00:21:25.995676 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.995687 | orchestrator | "", 2026-04-09 00:21:25.995699 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-09 00:21:25.995710 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-09 00:21:25.995721 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.995732 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-09 00:21:25.995743 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.995781 | orchestrator | "", 2026-04-09 00:21:25.995793 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-09 00:21:25.995804 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-04-09 00:21:25.995815 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.995826 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-04-09 00:21:25.995837 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.995848 | orchestrator | "", 2026-04-09 00:21:25.995859 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-09 00:21:25.995870 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.995881 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.995892 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.995905 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.995919 | orchestrator | "", 2026-04-09 00:21:25.995933 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-09 00:21:25.995946 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-09 00:21:25.995959 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.995972 | orchestrator | " Running: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-04-09 00:21:25.995984 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.995998 | orchestrator | "", 2026-04-09 00:21:25.996010 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-09 00:21:25.996028 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-09 00:21:25.996040 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.996077 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-09 00:21:25.996089 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.996100 | orchestrator | "", 2026-04-09 00:21:25.996117 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-09 00:21:25.996128 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-09 00:21:25.996140 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.996151 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-09 00:21:25.996162 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.996173 | orchestrator | "", 2026-04-09 00:21:25.996183 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-09 00:21:25.996195 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-09 00:21:25.996206 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.996217 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-09 00:21:25.996227 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.996238 | orchestrator | "", 2026-04-09 00:21:25.996249 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-09 00:21:25.996260 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996271 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.996281 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996292 | orchestrator | " 
Status: ✅ MATCH", 2026-04-09 00:21:25.996303 | orchestrator | "", 2026-04-09 00:21:25.996314 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-09 00:21:25.996325 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996336 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.996346 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996357 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.996368 | orchestrator | "", 2026-04-09 00:21:25.996378 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-09 00:21:25.996389 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996400 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.996411 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996421 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.996432 | orchestrator | "", 2026-04-09 00:21:25.996443 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-09 00:21:25.996462 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996473 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.996484 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996495 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.996506 | orchestrator | "", 2026-04-09 00:21:25.996517 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-09 00:21:25.996544 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996555 | orchestrator | " Enabled: true", 2026-04-09 00:21:25.996566 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-09 00:21:25.996577 | orchestrator | " Status: ✅ MATCH", 2026-04-09 00:21:25.996588 | orchestrator | "", 2026-04-09 00:21:25.996599 | orchestrator | "=== Summary ===", 2026-04-09 
00:21:25.996610 | orchestrator | "Errors (version mismatches): 0", 2026-04-09 00:21:25.996621 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-09 00:21:25.996632 | orchestrator | "", 2026-04-09 00:21:25.996643 | orchestrator | "✅ All running containers match expected versions!" 2026-04-09 00:21:25.996660 | orchestrator | ] 2026-04-09 00:21:25.996678 | orchestrator | } 2026-04-09 00:21:25.996698 | orchestrator | 2026-04-09 00:21:25.996714 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-09 00:21:26.057215 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:26.057303 | orchestrator | 2026-04-09 00:21:26.057317 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:21:26.057333 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-09 00:21:26.057344 | orchestrator | 2026-04-09 00:21:26.147795 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 00:21:26.147897 | orchestrator | + deactivate 2026-04-09 00:21:26.147915 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-09 00:21:26.147930 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-09 00:21:26.147941 | orchestrator | + export PATH 2026-04-09 00:21:26.147953 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-09 00:21:26.147965 | orchestrator | + '[' -n '' ']' 2026-04-09 00:21:26.147976 | orchestrator | + hash -r 2026-04-09 00:21:26.147987 | orchestrator | + '[' -n '' ']' 2026-04-09 00:21:26.147999 | orchestrator | + unset VIRTUAL_ENV 2026-04-09 00:21:26.148010 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-09 00:21:26.148021 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-09 00:21:26.148032 | orchestrator | + unset -f deactivate 2026-04-09 00:21:26.148044 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-09 00:21:26.153539 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 00:21:26.153600 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-09 00:21:26.153617 | orchestrator | + local max_attempts=60 2026-04-09 00:21:26.153631 | orchestrator | + local name=ceph-ansible 2026-04-09 00:21:26.153643 | orchestrator | + local attempt_num=1 2026-04-09 00:21:26.154327 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:21:26.186810 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:21:26.186907 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-09 00:21:26.186933 | orchestrator | + local max_attempts=60 2026-04-09 00:21:26.186953 | orchestrator | + local name=kolla-ansible 2026-04-09 00:21:26.186972 | orchestrator | + local attempt_num=1 2026-04-09 00:21:26.187783 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-09 00:21:26.222832 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:21:26.222912 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-09 00:21:26.222928 | orchestrator | + local max_attempts=60 2026-04-09 00:21:26.222941 | orchestrator | + local name=osism-ansible 2026-04-09 00:21:26.222952 | orchestrator | + local attempt_num=1 2026-04-09 00:21:26.223322 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-09 00:21:26.250696 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:21:26.250745 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 00:21:26.250761 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-09 00:21:26.852475 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-09 00:21:27.011771 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-09 00:21:27.011885 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-09 00:21:27.011900 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-09 00:21:27.011912 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-09 00:21:27.011923 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-09 00:21:27.011933 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-09 00:21:27.011943 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-09 00:21:27.012019 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2026-04-09 00:21:27.012032 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-09 00:21:27.012042 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-09 00:21:27.012084 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-04-09 00:21:27.012095 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-09 00:21:27.012105 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-09 00:21:27.012114 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-09 00:21:27.012124 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-09 00:21:27.012134 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-09 00:21:27.016519 | orchestrator | ++ semver latest 7.0.0 2026-04-09 00:21:27.052793 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:21:27.052903 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:21:27.052920 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-09 00:21:27.055399 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-09 00:21:39.504934 | orchestrator | 2026-04-09 00:21:39 | INFO  | Prepare task for execution of resolvconf. 2026-04-09 00:21:39.726646 | orchestrator | 2026-04-09 00:21:39 | INFO  | Task 7d022c33-fa77-4d72-804a-94a65122d147 (resolvconf) was prepared for execution. 2026-04-09 00:21:39.726752 | orchestrator | 2026-04-09 00:21:39 | INFO  | It takes a moment until task 7d022c33-fa77-4d72-804a-94a65122d147 (resolvconf) has been started and output is visible here. 
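The `wait_for_container_healthy` calls traced earlier (for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`) poll the Docker health status until it reports `healthy`. A minimal sketch of that loop, reconstructed from the trace: the `docker inspect` probe and the `healthy` comparison are visible in the log, while the retry sleep and failure message are assumptions.

```shell
# Sketch of the health-wait loop from the deploy script trace above.
# The 5-second retry interval and the failure path are assumptions;
# the trace only shows the probe and the "healthy" comparison.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log all three containers are already healthy on the first probe, so each call returns immediately without looping.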
2026-04-09 00:21:52.160424 | orchestrator | 2026-04-09 00:21:52.160545 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-09 00:21:52.160564 | orchestrator | 2026-04-09 00:21:52.160576 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:21:52.160588 | orchestrator | Thursday 09 April 2026 00:21:42 +0000 (0:00:00.172) 0:00:00.172 ******** 2026-04-09 00:21:52.160600 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:52.160614 | orchestrator | 2026-04-09 00:21:52.160625 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-09 00:21:52.160638 | orchestrator | Thursday 09 April 2026 00:21:46 +0000 (0:00:03.517) 0:00:03.690 ******** 2026-04-09 00:21:52.160649 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:52.160660 | orchestrator | 2026-04-09 00:21:52.160671 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-09 00:21:52.160682 | orchestrator | Thursday 09 April 2026 00:21:46 +0000 (0:00:00.053) 0:00:03.743 ******** 2026-04-09 00:21:52.160694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-09 00:21:52.160706 | orchestrator | 2026-04-09 00:21:52.160717 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-09 00:21:52.160739 | orchestrator | Thursday 09 April 2026 00:21:46 +0000 (0:00:00.067) 0:00:03.811 ******** 2026-04-09 00:21:52.160751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 00:21:52.160762 | orchestrator | 2026-04-09 00:21:52.160773 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-09 00:21:52.160784 | orchestrator | Thursday 09 April 2026 00:21:46 +0000 (0:00:00.059) 0:00:03.870 ******** 2026-04-09 00:21:52.160794 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:52.160806 | orchestrator | 2026-04-09 00:21:52.160817 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-09 00:21:52.160827 | orchestrator | Thursday 09 April 2026 00:21:47 +0000 (0:00:01.056) 0:00:04.927 ******** 2026-04-09 00:21:52.160838 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:52.160849 | orchestrator | 2026-04-09 00:21:52.160860 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-09 00:21:52.160871 | orchestrator | Thursday 09 April 2026 00:21:47 +0000 (0:00:00.051) 0:00:04.978 ******** 2026-04-09 00:21:52.160881 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:52.160892 | orchestrator | 2026-04-09 00:21:52.160903 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-09 00:21:52.160913 | orchestrator | Thursday 09 April 2026 00:21:48 +0000 (0:00:00.539) 0:00:05.517 ******** 2026-04-09 00:21:52.160924 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:52.160935 | orchestrator | 2026-04-09 00:21:52.160946 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-09 00:21:52.160960 | orchestrator | Thursday 09 April 2026 00:21:48 +0000 (0:00:00.082) 0:00:05.600 ******** 2026-04-09 00:21:52.160974 | orchestrator | changed: [testbed-manager] 2026-04-09 00:21:52.160987 | orchestrator | 2026-04-09 00:21:52.160999 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-09 00:21:52.161013 | orchestrator | Thursday 09 April 2026 00:21:48 +0000 (0:00:00.555) 0:00:06.156 ******** 2026-04-09 00:21:52.161025 | orchestrator | changed: 
[testbed-manager] 2026-04-09 00:21:52.161038 | orchestrator | 2026-04-09 00:21:52.161085 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-09 00:21:52.161119 | orchestrator | Thursday 09 April 2026 00:21:49 +0000 (0:00:01.078) 0:00:07.234 ******** 2026-04-09 00:21:52.161132 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:52.161146 | orchestrator | 2026-04-09 00:21:52.161157 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-09 00:21:52.161168 | orchestrator | Thursday 09 April 2026 00:21:50 +0000 (0:00:00.950) 0:00:08.185 ******** 2026-04-09 00:21:52.161179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-09 00:21:52.161190 | orchestrator | 2026-04-09 00:21:52.161201 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-09 00:21:52.161211 | orchestrator | Thursday 09 April 2026 00:21:50 +0000 (0:00:00.059) 0:00:08.244 ******** 2026-04-09 00:21:52.161222 | orchestrator | changed: [testbed-manager] 2026-04-09 00:21:52.161233 | orchestrator | 2026-04-09 00:21:52.161244 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:21:52.161256 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 00:21:52.161266 | orchestrator | 2026-04-09 00:21:52.161277 | orchestrator | 2026-04-09 00:21:52.161288 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:21:52.161299 | orchestrator | Thursday 09 April 2026 00:21:51 +0000 (0:00:01.136) 0:00:09.380 ******** 2026-04-09 00:21:52.161309 | orchestrator | =============================================================================== 2026-04-09 00:21:52.161320 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.52s 2026-04-09 00:21:52.161331 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2026-04-09 00:21:52.161342 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s 2026-04-09 00:21:52.161352 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s 2026-04-09 00:21:52.161363 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-04-09 00:21:52.161374 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-04-09 00:21:52.161402 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2026-04-09 00:21:52.161414 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-09 00:21:52.161425 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-04-09 00:21:52.161435 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-04-09 00:21:52.161446 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.06s 2026-04-09 00:21:52.161464 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2026-04-09 00:21:52.161475 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-04-09 00:21:52.320744 | orchestrator | + osism apply sshconfig 2026-04-09 00:22:03.636417 | orchestrator | 2026-04-09 00:22:03 | INFO  | Prepare task for execution of sshconfig. 2026-04-09 00:22:03.708308 | orchestrator | 2026-04-09 00:22:03 | INFO  | Task ecccec51-4fd3-42a2-ad7c-01466a71e4d7 (sshconfig) was prepared for execution. 
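The two `changed` tasks in the resolvconf play above (linking the systemd-resolved stub file and copying configuration) boil down to repointing `/etc/resolv.conf`. A minimal sketch, with the target directory parameterized purely so it can be exercised without root; in the role it is simply `/etc`:

```shell
# Sketch of the "Link /run/systemd/resolve/stub-resolv.conf to
# /etc/resolv.conf" step from the play above. The parameterized
# directory is an illustration aid, not part of the role.
link_stub_resolv() {
    local etc_dir=${1:-/etc}
    # -sfn: create/replace the symlink without dereferencing an existing one
    ln -sfn /run/systemd/resolve/stub-resolv.conf "$etc_dir/resolv.conf"
}
```

After this, name resolution goes through the systemd-resolved stub listener, which is why the play restarts `systemd-resolved` as its final changed task.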
2026-04-09 00:22:03.708401 | orchestrator | 2026-04-09 00:22:03 | INFO  | It takes a moment until task ecccec51-4fd3-42a2-ad7c-01466a71e4d7 (sshconfig) has been started and output is visible here. 2026-04-09 00:22:14.635694 | orchestrator | 2026-04-09 00:22:14.635816 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-09 00:22:14.635834 | orchestrator | 2026-04-09 00:22:14.635846 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-09 00:22:14.635858 | orchestrator | Thursday 09 April 2026 00:22:06 +0000 (0:00:00.190) 0:00:00.190 ******** 2026-04-09 00:22:14.635894 | orchestrator | ok: [testbed-manager] 2026-04-09 00:22:14.635909 | orchestrator | 2026-04-09 00:22:14.635920 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-09 00:22:14.635932 | orchestrator | Thursday 09 April 2026 00:22:07 +0000 (0:00:00.818) 0:00:01.008 ******** 2026-04-09 00:22:14.635943 | orchestrator | changed: [testbed-manager] 2026-04-09 00:22:14.635954 | orchestrator | 2026-04-09 00:22:14.635965 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-09 00:22:14.635976 | orchestrator | Thursday 09 April 2026 00:22:08 +0000 (0:00:00.545) 0:00:01.554 ******** 2026-04-09 00:22:14.635987 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-09 00:22:14.635998 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-09 00:22:14.636009 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-09 00:22:14.636020 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-09 00:22:14.636031 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-09 00:22:14.636109 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-09 00:22:14.636121 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-09 00:22:14.636132 | orchestrator | 2026-04-09 00:22:14.636143 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-09 00:22:14.636153 | orchestrator | Thursday 09 April 2026 00:22:13 +0000 (0:00:05.600) 0:00:07.154 ******** 2026-04-09 00:22:14.636164 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:22:14.636175 | orchestrator | 2026-04-09 00:22:14.636186 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-09 00:22:14.636197 | orchestrator | Thursday 09 April 2026 00:22:13 +0000 (0:00:00.098) 0:00:07.253 ******** 2026-04-09 00:22:14.636209 | orchestrator | changed: [testbed-manager] 2026-04-09 00:22:14.636220 | orchestrator | 2026-04-09 00:22:14.636231 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:22:14.636243 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:22:14.636255 | orchestrator | 2026-04-09 00:22:14.636266 | orchestrator | 2026-04-09 00:22:14.636277 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:22:14.636288 | orchestrator | Thursday 09 April 2026 00:22:14 +0000 (0:00:00.545) 0:00:07.798 ******** 2026-04-09 00:22:14.636299 | orchestrator | =============================================================================== 2026-04-09 00:22:14.636310 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.60s 2026-04-09 00:22:14.636320 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.82s 2026-04-09 00:22:14.636331 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s 2026-04-09 00:22:14.636342 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.55s 2026-04-09 00:22:14.636353 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-04-09 00:22:14.794302 | orchestrator | + osism apply known-hosts 2026-04-09 00:22:26.082473 | orchestrator | 2026-04-09 00:22:26 | INFO  | Prepare task for execution of known-hosts. 2026-04-09 00:22:26.169112 | orchestrator | 2026-04-09 00:22:26 | INFO  | Task 15ce4c9b-fcac-4d1c-9c68-ba810538fb07 (known-hosts) was prepared for execution. 2026-04-09 00:22:26.169209 | orchestrator | 2026-04-09 00:22:26 | INFO  | It takes a moment until task 15ce4c9b-fcac-4d1c-9c68-ba810538fb07 (known-hosts) has been started and output is visible here. 2026-04-09 00:22:41.354152 | orchestrator | 2026-04-09 00:22:41.354268 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-09 00:22:41.354278 | orchestrator | 2026-04-09 00:22:41.354284 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-09 00:22:41.354290 | orchestrator | Thursday 09 April 2026 00:22:29 +0000 (0:00:00.192) 0:00:00.192 ******** 2026-04-09 00:22:41.354311 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-09 00:22:41.354317 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-09 00:22:41.354322 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-09 00:22:41.354327 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-09 00:22:41.354332 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-09 00:22:41.354337 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-09 00:22:41.354349 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-09 00:22:41.354355 | orchestrator | 2026-04-09 00:22:41.354360 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-09 
00:22:41.354367 | orchestrator | Thursday 09 April 2026 00:22:35 +0000 (0:00:06.367) 0:00:06.560 ******** 2026-04-09 00:22:41.354373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-09 00:22:41.354380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-09 00:22:41.354386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-09 00:22:41.354391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-09 00:22:41.354396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-09 00:22:41.354401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-09 00:22:41.354406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-09 00:22:41.354411 | orchestrator | 2026-04-09 00:22:41.354416 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:41.354422 | orchestrator | Thursday 09 April 2026 00:22:35 +0000 (0:00:00.165) 0:00:06.726 ******** 2026-04-09 00:22:41.354427 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0q6FE2AvsQ51HBBugO7BRm+rzK+GZBO8msZqUHgaAC) 2026-04-09 00:22:41.354436 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC52vkCbzRKYV+XJZYCCks/0Yse1DJXJ4DE7m484gMKJw1FF5M0XCD9SGhmwYBzUM5ai8EyyykncRMLYjDX/9QVNCAdDxGmd9y1ypRLTr1YLpB0s4LFJavnToR4kbJIleBOU0B2LVR/dL9nAPQCuYJSFLkkDpIs6EFkp5WgvWu9ZQHtrcMS/duDcD//QNO3PB5b15yULuDjKuLkwmoC40wE0wBuShEuZBTxNZiXk5DsTcWcBcAdBynW2co6nmAIBVVtSlStmY/AeQGntqXwKagFivVINust7o2zdopgN/9KaJSEDRmM+bJNtskcqOfdHNhpMX3XZt+RGVrlU8/L+Tb7LUzUAmY6kiX6JWbYa8p30LD3nNRnekAVvPq4DxojYTcVgo7NzkUT0qHDVSzwwNFIQnldZnFHFf/wgX1/YJOJeND5Di9gmDBSuWybfeMXKUhkyBdKITfDGN+csP4LBy/kGMvAhHd5TRdVHi7ZtNyYs18soulhYxExdBtkgctrEaM=) 2026-04-09 00:22:41.354443 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA34yK0xUvfcV5jrwgd8PMbvd8fxMjLNVv5neYGgDueIqtTDFKsEiMltk1tTAza00AdLKUgO9dVBNMW+2PDz53w=) 2026-04-09 00:22:41.354450 | orchestrator | 2026-04-09 00:22:41.354455 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:41.354460 | orchestrator | Thursday 09 April 2026 00:22:36 +0000 (0:00:01.206) 0:00:07.933 ******** 2026-04-09 00:22:41.354480 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgNPtHQLxm3Ci4zryH8WywexkJT+Xr6LOPHss+hkDdXNUTY0HfzAMEOd+KyecCKh8Z/xgni7SOCclQiTCV5BeRl+llbCm1b32hLQLzbyJn7ELFq+zxQoIS+RF6bSIUhHpnDrA41944Broh6bY1KRDG2REQJGNPrzqoEqRFrVRaTTR8fBZYEvaga7OBquMNUT3KZy2dH2vuoEW940CYwUKARiSUiA/2VS4cPNhjmDVVy6K+qusNp+ZEo/D9jAAX278XRFi6JPtITXOUEUz1UcZ5y2TVCoiHRku0486DPjavNg4260I892Js7s1yPGoX3hNoVjviFB2OkHrDoHZxpwW/G5HEjuliG214KTLrveuC3nAfZVgUEEkjLTt638AhDnUYqGjiH7LVTyClEU2cHZin8rKmehexY+04x6TSbKyDDvCx8jktn/ufsJNxNeKD1RN7kou7cd660aL+V4pFXrBiySgK/bp097HpmwSRMchSqSHGCnYXrN1t/2CSyd+0PNs=) 
2026-04-09 00:22:41.354492 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFtnGrfxzhM1VZqA1T7VGUPRpFDQjywnAuFIVIId05V3gSrWfRaWrsAjbOVRQqh7D3UcEXN9bPbGWGUJX9rcs7I=) 2026-04-09 00:22:41.354497 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMGYBzLgpcaxmi7YgAnGyxoUVdV1B8MZY/TSwQqTjjLx) 2026-04-09 00:22:41.354503 | orchestrator | 2026-04-09 00:22:41.354508 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:41.354513 | orchestrator | Thursday 09 April 2026 00:22:37 +0000 (0:00:00.999) 0:00:08.933 ******** 2026-04-09 00:22:41.354518 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN3G6KZs7xiBsw1U7id0/NG1+QAAf4Eod9nJDi4SLJ2eygYyL+QmxsKRIYe0qzUQFBJ4zgXF7Nt50Y//euOeThA=) 2026-04-09 00:22:41.354524 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP0EDiJslYdfZ6/VflG0VwHC6YrImBhTFlGat5hq570y) 2026-04-09 00:22:41.354571 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRDsj5KtPFDBtohBxyX/EV2Xn5MMffsvBosNWrKqUXvJ4HoRq7dt1MJ+UAHonMKa2xzN53bmKH74HkDU6yQTXV+xHx2szaJBoyKDo47izgrF/yh1fkzfmLLZVvp/DGXZwAS86y46/AY086E0bD8B4rlugPINNgxJbTLJy0d0mel9VMSkPaHjXPvYgVKzsq8OyMDUMf7XTghf9U0CkLvN7L/d2dy4sxnSPrp+iUvBquzaqZul6iRJABE7KwliBtstpJwpFYTJqOUoj/TX+QmvUMbgu8QHKI5EaDALej3JW2rCBAtvtPeBpTl91ZI62WakR1A1gzcmFz/aCEIb1NX3LiAgmlKaF9rJLO4/MQbfDyyiYis7mB+Cqak9sTvD/XhhD4kUA5OhCSK2KkEJYSEBGBX2b8tmg7qHnnMwBv1HHiBXbYIZEfe0381ccKS8cfVNRiwbSIT5hz2G5fRGRG6TCJ1ocUzQnjmb6eMabz3PvGttzr4vV7OwSCOM9aBoo3pSU=) 2026-04-09 00:22:41.354577 | orchestrator | 2026-04-09 00:22:41.354582 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:41.354588 
| orchestrator | Thursday 09 April 2026 00:22:38 +0000 (0:00:01.043) 0:00:09.976 ******** 2026-04-09 00:22:41.354593 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJQxjkgeO0h7LTqF7vbq1+35V4Pgay4E1gcvddNO2ybb) 2026-04-09 00:22:41.354598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZqEGUtM/kJ5hJtARhFrYP15Te1TCEg4RYPIVR/9fhUTMWD6OPp08lNdsPx3nPoeastqK3A/OqVfJxiIP26kunUekByfpcORbeCbT5X/UvKpFhAGM7q1LB1/4baFpsJ2UCJu3Oz1sFSHbhEizbfzIVgvoZN5FAvDPBhCVg59QkQXzQywe1ei1k3ivqyninBGbE4tnXQQ4Fdr+500JRsRma4mJzBRXHgKbsav1GTxXfknsIlY6SdTwC9j4Hs5WTuSi2EVt5FjWRPKlqYoRDiywIB46wo+sk/YY2rQojytr4aby43FxbM0yPv8nGTyL6Los4WtD/csR8+eRMiVLAzGfVOm6tU+C329hi2M90OnvNcS7YpeKpStqCMwUV+Up+XJxGIev3JbyNSk+JSJ3lGUkB/O96TckZoRJHtcvDCZ+SmfOf3XVz61vJ6+XruHqw+gH//Pdlg0PHWwFnp6MJKd//cMefMPjUbjCQTpRccE3b+AyELAlbrfqO6Bk5xk+Ecws=) 2026-04-09 00:22:41.354604 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJda3WnEMPDnAXLLPLm9QS+HsySYS2AncKsw3SzUuOkSmEoL10VpOn/YOPbgH24Bkr7N7Za0BpCkOiOi1ZjYJLs=) 2026-04-09 00:22:41.354609 | orchestrator | 2026-04-09 00:22:41.354614 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:41.354619 | orchestrator | Thursday 09 April 2026 00:22:39 +0000 (0:00:01.001) 0:00:10.978 ******** 2026-04-09 00:22:41.354625 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC49ZFlAUH8bKnCBfkxS3wQ3+fDNZR1bAP0lylBd8nlOb7/2mS52mbiKPCio+rjGDQtyJjwRvfICc0axMVfQ4/K0oQW9BGC0RdIhAJqvqsSwi6sjoxgLR8VljQ4KFhJtcwZ80NIJMjDB3Wjh6EvMBqPq9YtG22opOrKPEHlznGMUFkln2o/vQLqvNelJ1zPibj5yctVTC/iHSqjL3fV6n8I/aBUI3vY6292NXCXzq+xElTFPlLSsUh58FpC+iOVL8zfMioebBA/jpBlpz7R9c7FSINeCFx0Nlz7klLpxmh/hOkDfiOkPQnKSao+m3jWe+h7UyPr5YtXw9DluaWlLnv91+MO/NZsGxtPvsicOvFtQjQ7Qg2EMv67nocxSShrEAM7DqnTKLWD60OthgThFDduAoki9LzrK27SoaCAghL+yvZhEmBwQ8+RBvlLr3hyiFe+QYa7LXPggk2YsGU5mAgqBQ9m4Rkdii4ZcSJIDBaoQh29027o5taX1WKp1K64hhM=) 2026-04-09 00:22:41.354635 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMU6WEsEJTokOwNIKoOmlQNNs4+1ZRgPIMiLL2Q4nj28M7FVznUwxAAjnw+t+mBRz+wrE6OQSEbllmtWosrKRas=) 2026-04-09 00:22:41.354641 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHrUtC+95Uc658yDfYTwTduQxeAV0d109TPjSx2CWk4A) 2026-04-09 00:22:41.354647 | orchestrator | 2026-04-09 00:22:41.354653 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:41.354658 | orchestrator | Thursday 09 April 2026 00:22:41 +0000 (0:00:01.014) 0:00:11.992 ******** 2026-04-09 00:22:41.354668 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAXp+vFdyLAtGGsm5phLK8Lbzr7N3z+0iAVrsW0TUrp20+oFdBl5xKoCPxnu4M4Xitqp7NUe38KSClPDDKJwF9s=) 2026-04-09 00:22:51.819002 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6DNI7Zs6RF4+yI0eu/XssTCJw/f+aYIBw/E4a/r7YPkIMWbgD+jt9cAC1Hkyn219zOMtgoPQD4zI9klAZUVtXxOnnhixUJLgp1kz5LWp1UwJ05a/Df8R5lYlGyx+R3xkqMl5Mhy9MHJPeBwr40RCDFl4iCMoKQeoknU/zObkty+SnyaZpSK8QcKdPJL/S6f4kAgcjZty9TIpm2DjzKTcCmQ3UMUNVGKqtDJVNUR8BTbewJ8VNL0FpMTrFVv6YIgGwYVJUiOw3TWPvA6wPpkZwlQjvDlqx2zdOAeddkHLL0valhRebJkF6Agrd8PEHWDHiENHCyenz9Kf76u2DABiQwB7Po2GhN6ZrsDIeEQ5qz+QFtG8v0puOOivECqw9rgEIBA29HTtXVysPCwMF7NOPS40qKRGIXYzC8inf2b8TaG/xBuE71vFE1xAjB3gEAtQhjLCyWjWzXRdvy+5/jsN6qeczH3Owh5pRPnZHupO6SygxSBPhAR/LrJ3Hc5ZEIk8=) 2026-04-09 00:22:51.819142 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICXRHkiyf+hge4jO7RfiBXqfWAW3wb6Q93ktFlLQY9cM) 2026-04-09 00:22:51.819161 | orchestrator | 2026-04-09 00:22:51.819174 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:51.819187 | orchestrator | Thursday 09 April 2026 00:22:42 +0000 (0:00:01.000) 0:00:12.992 ******** 2026-04-09 00:22:51.819200 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKU1Sf/dUkLts8xPqpfpXR9bIZmN3hG2QAXAveiGYIeGA+eg488oCt9VQqmIuwuL0oeXGmiiad9+aOWxczt9AI4=) 2026-04-09 00:22:51.819213 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhGDgsYA2FjdcuGL+pO2derxisCZJKsdu+g7bDHZN1f) 2026-04-09 00:22:51.819225 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxTMvcKYrb7MuoSdzMKbnTT/IThmRQ8UGLHiVHIgXKvQxRKZtmUaBMvpTRei5Dk5AAMMZBMWZwV2u3SADzFkTvYf/aax6JoGBZvjrE4YZjh7410rJDwNuxObNWjkoBvYpYtEj7vD0k9KONk1VnhmeCuxTq26+GsGSyS1M5nxyNrDZ2cPMTAmL2t2yF5BdPAJDOGbDZmluhLf6wwmG8JvQyTTqgojbZqdE+bZNL0qKedOnKezwos39JUWtjt+1j3wokc9tEQhz2GFTfaeDDfcvjyKBKy77d/x+rryBHrLxSvLRecSOS8JxsJ13B9Kx/HeLdGT1xenbaGZmo+DLC60ArxoqHNRDOCVpJQGVAPfAq+h3xxLxXNX0m0vonT1kiQPjbM7WQVRX5kNOm38nuioheQ7m4U6ietvuGdqEYm1czB/z+1ztNoD3JvRzhjMtqdmqju4AkAuS4lZ0n7V2tB1aStjUzQV30af3oxbo8RT4mhyNghWjdOI92O2zIstL8Oos=) 2026-04-09 00:22:51.819237 | orchestrator | 2026-04-09 00:22:51.819248 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-09 00:22:51.819261 | orchestrator | Thursday 09 April 2026 00:22:43 +0000 (0:00:01.010) 0:00:14.003 ******** 2026-04-09 00:22:51.819273 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-09 00:22:51.819284 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-09 00:22:51.819295 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-09 00:22:51.819306 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-09 00:22:51.819338 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-09 00:22:51.819349 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-09 00:22:51.819368 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-09 00:22:51.819380 | orchestrator | 2026-04-09 00:22:51.819391 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-09 00:22:51.819403 | orchestrator | Thursday 09 April 2026 00:22:48 +0000 (0:00:05.238) 0:00:19.242 ******** 2026-04-09 00:22:51.819415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2026-04-09 00:22:51.819429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-09 00:22:51.819440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-09 00:22:51.819451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-09 00:22:51.819462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-09 00:22:51.819473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-09 00:22:51.819483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-09 00:22:51.819495 | orchestrator | 2026-04-09 00:22:51.819522 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:51.819535 | orchestrator | Thursday 09 April 2026 00:22:48 +0000 (0:00:00.178) 0:00:19.420 ******** 2026-04-09 00:22:51.819546 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA34yK0xUvfcV5jrwgd8PMbvd8fxMjLNVv5neYGgDueIqtTDFKsEiMltk1tTAza00AdLKUgO9dVBNMW+2PDz53w=) 2026-04-09 00:22:51.819562 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC52vkCbzRKYV+XJZYCCks/0Yse1DJXJ4DE7m484gMKJw1FF5M0XCD9SGhmwYBzUM5ai8EyyykncRMLYjDX/9QVNCAdDxGmd9y1ypRLTr1YLpB0s4LFJavnToR4kbJIleBOU0B2LVR/dL9nAPQCuYJSFLkkDpIs6EFkp5WgvWu9ZQHtrcMS/duDcD//QNO3PB5b15yULuDjKuLkwmoC40wE0wBuShEuZBTxNZiXk5DsTcWcBcAdBynW2co6nmAIBVVtSlStmY/AeQGntqXwKagFivVINust7o2zdopgN/9KaJSEDRmM+bJNtskcqOfdHNhpMX3XZt+RGVrlU8/L+Tb7LUzUAmY6kiX6JWbYa8p30LD3nNRnekAVvPq4DxojYTcVgo7NzkUT0qHDVSzwwNFIQnldZnFHFf/wgX1/YJOJeND5Di9gmDBSuWybfeMXKUhkyBdKITfDGN+csP4LBy/kGMvAhHd5TRdVHi7ZtNyYs18soulhYxExdBtkgctrEaM=) 2026-04-09 00:22:51.819576 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0q6FE2AvsQ51HBBugO7BRm+rzK+GZBO8msZqUHgaAC) 2026-04-09 00:22:51.819589 | orchestrator | 2026-04-09 00:22:51.819603 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:51.819616 | orchestrator | Thursday 09 April 2026 00:22:49 +0000 (0:00:01.006) 0:00:20.427 ******** 2026-04-09 00:22:51.819629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMGYBzLgpcaxmi7YgAnGyxoUVdV1B8MZY/TSwQqTjjLx) 2026-04-09 00:22:51.819642 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgNPtHQLxm3Ci4zryH8WywexkJT+Xr6LOPHss+hkDdXNUTY0HfzAMEOd+KyecCKh8Z/xgni7SOCclQiTCV5BeRl+llbCm1b32hLQLzbyJn7ELFq+zxQoIS+RF6bSIUhHpnDrA41944Broh6bY1KRDG2REQJGNPrzqoEqRFrVRaTTR8fBZYEvaga7OBquMNUT3KZy2dH2vuoEW940CYwUKARiSUiA/2VS4cPNhjmDVVy6K+qusNp+ZEo/D9jAAX278XRFi6JPtITXOUEUz1UcZ5y2TVCoiHRku0486DPjavNg4260I892Js7s1yPGoX3hNoVjviFB2OkHrDoHZxpwW/G5HEjuliG214KTLrveuC3nAfZVgUEEkjLTt638AhDnUYqGjiH7LVTyClEU2cHZin8rKmehexY+04x6TSbKyDDvCx8jktn/ufsJNxNeKD1RN7kou7cd660aL+V4pFXrBiySgK/bp097HpmwSRMchSqSHGCnYXrN1t/2CSyd+0PNs=) 2026-04-09 00:22:51.819662 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFtnGrfxzhM1VZqA1T7VGUPRpFDQjywnAuFIVIId05V3gSrWfRaWrsAjbOVRQqh7D3UcEXN9bPbGWGUJX9rcs7I=) 2026-04-09 00:22:51.819675 | orchestrator | 2026-04-09 00:22:51.819688 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:51.819701 | orchestrator | Thursday 09 April 2026 00:22:50 +0000 (0:00:01.023) 0:00:21.451 ******** 2026-04-09 00:22:51.819714 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN3G6KZs7xiBsw1U7id0/NG1+QAAf4Eod9nJDi4SLJ2eygYyL+QmxsKRIYe0qzUQFBJ4zgXF7Nt50Y//euOeThA=) 2026-04-09 00:22:51.819728 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRDsj5KtPFDBtohBxyX/EV2Xn5MMffsvBosNWrKqUXvJ4HoRq7dt1MJ+UAHonMKa2xzN53bmKH74HkDU6yQTXV+xHx2szaJBoyKDo47izgrF/yh1fkzfmLLZVvp/DGXZwAS86y46/AY086E0bD8B4rlugPINNgxJbTLJy0d0mel9VMSkPaHjXPvYgVKzsq8OyMDUMf7XTghf9U0CkLvN7L/d2dy4sxnSPrp+iUvBquzaqZul6iRJABE7KwliBtstpJwpFYTJqOUoj/TX+QmvUMbgu8QHKI5EaDALej3JW2rCBAtvtPeBpTl91ZI62WakR1A1gzcmFz/aCEIb1NX3LiAgmlKaF9rJLO4/MQbfDyyiYis7mB+Cqak9sTvD/XhhD4kUA5OhCSK2KkEJYSEBGBX2b8tmg7qHnnMwBv1HHiBXbYIZEfe0381ccKS8cfVNRiwbSIT5hz2G5fRGRG6TCJ1ocUzQnjmb6eMabz3PvGttzr4vV7OwSCOM9aBoo3pSU=) 2026-04-09 00:22:51.819741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP0EDiJslYdfZ6/VflG0VwHC6YrImBhTFlGat5hq570y) 2026-04-09 00:22:51.819754 | orchestrator | 2026-04-09 00:22:51.819767 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:51.819779 | orchestrator | Thursday 09 April 2026 00:22:51 +0000 (0:00:00.977) 0:00:22.428 ******** 2026-04-09 00:22:51.819802 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZqEGUtM/kJ5hJtARhFrYP15Te1TCEg4RYPIVR/9fhUTMWD6OPp08lNdsPx3nPoeastqK3A/OqVfJxiIP26kunUekByfpcORbeCbT5X/UvKpFhAGM7q1LB1/4baFpsJ2UCJu3Oz1sFSHbhEizbfzIVgvoZN5FAvDPBhCVg59QkQXzQywe1ei1k3ivqyninBGbE4tnXQQ4Fdr+500JRsRma4mJzBRXHgKbsav1GTxXfknsIlY6SdTwC9j4Hs5WTuSi2EVt5FjWRPKlqYoRDiywIB46wo+sk/YY2rQojytr4aby43FxbM0yPv8nGTyL6Los4WtD/csR8+eRMiVLAzGfVOm6tU+C329hi2M90OnvNcS7YpeKpStqCMwUV+Up+XJxGIev3JbyNSk+JSJ3lGUkB/O96TckZoRJHtcvDCZ+SmfOf3XVz61vJ6+XruHqw+gH//Pdlg0PHWwFnp6MJKd//cMefMPjUbjCQTpRccE3b+AyELAlbrfqO6Bk5xk+Ecws=) 2026-04-09 00:22:56.433680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJda3WnEMPDnAXLLPLm9QS+HsySYS2AncKsw3SzUuOkSmEoL10VpOn/YOPbgH24Bkr7N7Za0BpCkOiOi1ZjYJLs=) 2026-04-09 00:22:56.433782 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJQxjkgeO0h7LTqF7vbq1+35V4Pgay4E1gcvddNO2ybb) 2026-04-09 00:22:56.433799 | orchestrator | 2026-04-09 00:22:56.433813 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:56.433825 | orchestrator | Thursday 09 April 2026 00:22:52 +0000 (0:00:00.999) 0:00:23.427 ******** 2026-04-09 00:22:56.433844 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC49ZFlAUH8bKnCBfkxS3wQ3+fDNZR1bAP0lylBd8nlOb7/2mS52mbiKPCio+rjGDQtyJjwRvfICc0axMVfQ4/K0oQW9BGC0RdIhAJqvqsSwi6sjoxgLR8VljQ4KFhJtcwZ80NIJMjDB3Wjh6EvMBqPq9YtG22opOrKPEHlznGMUFkln2o/vQLqvNelJ1zPibj5yctVTC/iHSqjL3fV6n8I/aBUI3vY6292NXCXzq+xElTFPlLSsUh58FpC+iOVL8zfMioebBA/jpBlpz7R9c7FSINeCFx0Nlz7klLpxmh/hOkDfiOkPQnKSao+m3jWe+h7UyPr5YtXw9DluaWlLnv91+MO/NZsGxtPvsicOvFtQjQ7Qg2EMv67nocxSShrEAM7DqnTKLWD60OthgThFDduAoki9LzrK27SoaCAghL+yvZhEmBwQ8+RBvlLr3hyiFe+QYa7LXPggk2YsGU5mAgqBQ9m4Rkdii4ZcSJIDBaoQh29027o5taX1WKp1K64hhM=) 2026-04-09 00:22:56.433881 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMU6WEsEJTokOwNIKoOmlQNNs4+1ZRgPIMiLL2Q4nj28M7FVznUwxAAjnw+t+mBRz+wrE6OQSEbllmtWosrKRas=) 2026-04-09 00:22:56.433893 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHrUtC+95Uc658yDfYTwTduQxeAV0d109TPjSx2CWk4A) 2026-04-09 00:22:56.433904 | orchestrator | 2026-04-09 00:22:56.433916 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:56.433927 | orchestrator | Thursday 09 April 2026 00:22:53 +0000 (0:00:00.984) 0:00:24.412 ******** 2026-04-09 00:22:56.433938 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICXRHkiyf+hge4jO7RfiBXqfWAW3wb6Q93ktFlLQY9cM) 2026-04-09 00:22:56.433950 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6DNI7Zs6RF4+yI0eu/XssTCJw/f+aYIBw/E4a/r7YPkIMWbgD+jt9cAC1Hkyn219zOMtgoPQD4zI9klAZUVtXxOnnhixUJLgp1kz5LWp1UwJ05a/Df8R5lYlGyx+R3xkqMl5Mhy9MHJPeBwr40RCDFl4iCMoKQeoknU/zObkty+SnyaZpSK8QcKdPJL/S6f4kAgcjZty9TIpm2DjzKTcCmQ3UMUNVGKqtDJVNUR8BTbewJ8VNL0FpMTrFVv6YIgGwYVJUiOw3TWPvA6wPpkZwlQjvDlqx2zdOAeddkHLL0valhRebJkF6Agrd8PEHWDHiENHCyenz9Kf76u2DABiQwB7Po2GhN6ZrsDIeEQ5qz+QFtG8v0puOOivECqw9rgEIBA29HTtXVysPCwMF7NOPS40qKRGIXYzC8inf2b8TaG/xBuE71vFE1xAjB3gEAtQhjLCyWjWzXRdvy+5/jsN6qeczH3Owh5pRPnZHupO6SygxSBPhAR/LrJ3Hc5ZEIk8=) 2026-04-09 00:22:56.433961 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAXp+vFdyLAtGGsm5phLK8Lbzr7N3z+0iAVrsW0TUrp20+oFdBl5xKoCPxnu4M4Xitqp7NUe38KSClPDDKJwF9s=) 2026-04-09 00:22:56.433972 | orchestrator | 2026-04-09 00:22:56.433983 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:56.433994 | orchestrator | Thursday 09 April 2026 00:22:54 +0000 (0:00:00.998) 
0:00:25.410 ******** 2026-04-09 00:22:56.434005 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxTMvcKYrb7MuoSdzMKbnTT/IThmRQ8UGLHiVHIgXKvQxRKZtmUaBMvpTRei5Dk5AAMMZBMWZwV2u3SADzFkTvYf/aax6JoGBZvjrE4YZjh7410rJDwNuxObNWjkoBvYpYtEj7vD0k9KONk1VnhmeCuxTq26+GsGSyS1M5nxyNrDZ2cPMTAmL2t2yF5BdPAJDOGbDZmluhLf6wwmG8JvQyTTqgojbZqdE+bZNL0qKedOnKezwos39JUWtjt+1j3wokc9tEQhz2GFTfaeDDfcvjyKBKy77d/x+rryBHrLxSvLRecSOS8JxsJ13B9Kx/HeLdGT1xenbaGZmo+DLC60ArxoqHNRDOCVpJQGVAPfAq+h3xxLxXNX0m0vonT1kiQPjbM7WQVRX5kNOm38nuioheQ7m4U6ietvuGdqEYm1czB/z+1ztNoD3JvRzhjMtqdmqju4AkAuS4lZ0n7V2tB1aStjUzQV30af3oxbo8RT4mhyNghWjdOI92O2zIstL8Oos=) 2026-04-09 00:22:56.434124 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKU1Sf/dUkLts8xPqpfpXR9bIZmN3hG2QAXAveiGYIeGA+eg488oCt9VQqmIuwuL0oeXGmiiad9+aOWxczt9AI4=) 2026-04-09 00:22:56.434139 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhGDgsYA2FjdcuGL+pO2derxisCZJKsdu+g7bDHZN1f) 2026-04-09 00:22:56.434150 | orchestrator | 2026-04-09 00:22:56.434161 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-09 00:22:56.434172 | orchestrator | Thursday 09 April 2026 00:22:55 +0000 (0:00:01.078) 0:00:26.489 ******** 2026-04-09 00:22:56.434187 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-09 00:22:56.434207 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-09 00:22:56.434248 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-09 00:22:56.434268 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-09 00:22:56.434287 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-09 00:22:56.434305 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-09 
00:22:56.434321 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-09 00:22:56.434354 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:22:56.434375 | orchestrator | 2026-04-09 00:22:56.434394 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-09 00:22:56.434414 | orchestrator | Thursday 09 April 2026 00:22:55 +0000 (0:00:00.179) 0:00:26.668 ******** 2026-04-09 00:22:56.434433 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:22:56.434453 | orchestrator | 2026-04-09 00:22:56.434472 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-09 00:22:56.434491 | orchestrator | Thursday 09 April 2026 00:22:55 +0000 (0:00:00.045) 0:00:26.713 ******** 2026-04-09 00:22:56.434504 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:22:56.434518 | orchestrator | 2026-04-09 00:22:56.434531 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-09 00:22:56.434544 | orchestrator | Thursday 09 April 2026 00:22:55 +0000 (0:00:00.041) 0:00:26.755 ******** 2026-04-09 00:22:56.434556 | orchestrator | changed: [testbed-manager] 2026-04-09 00:22:56.434567 | orchestrator | 2026-04-09 00:22:56.434578 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:22:56.434589 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 00:22:56.434601 | orchestrator | 2026-04-09 00:22:56.434612 | orchestrator | 2026-04-09 00:22:56.434623 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:22:56.434634 | orchestrator | Thursday 09 April 2026 00:22:56 +0000 (0:00:00.481) 0:00:27.236 ******** 2026-04-09 00:22:56.434644 | orchestrator | =============================================================================== 
2026-04-09 00:22:56.434655 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.37s 2026-04-09 00:22:56.434666 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.24s 2026-04-09 00:22:56.434678 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-04-09 00:22:56.434688 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-04-09 00:22:56.434699 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-09 00:22:56.434710 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-09 00:22:56.434720 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-09 00:22:56.434731 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-09 00:22:56.434742 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-09 00:22:56.434752 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-09 00:22:56.434763 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-09 00:22:56.434774 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-09 00:22:56.434784 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-09 00:22:56.434795 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-09 00:22:56.434814 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-04-09 00:22:56.434826 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 
2026-04-09 00:22:56.434836 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2026-04-09 00:22:56.434847 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-04-09 00:22:56.434858 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-04-09 00:22:56.434870 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-04-09 00:22:56.589569 | orchestrator | + osism apply squid 2026-04-09 00:23:07.852228 | orchestrator | 2026-04-09 00:23:07 | INFO  | Prepare task for execution of squid. 2026-04-09 00:23:07.925658 | orchestrator | 2026-04-09 00:23:07 | INFO  | Task e4379a92-f213-4e76-865c-2ae629b75d57 (squid) was prepared for execution. 2026-04-09 00:23:07.925729 | orchestrator | 2026-04-09 00:23:07 | INFO  | It takes a moment until task e4379a92-f213-4e76-865c-2ae629b75d57 (squid) has been started and output is visible here. 
2026-04-09 00:25:11.970950 | orchestrator | 2026-04-09 00:25:11.971092 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-09 00:25:11.971111 | orchestrator | 2026-04-09 00:25:11.971124 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-09 00:25:11.971135 | orchestrator | Thursday 09 April 2026 00:23:11 +0000 (0:00:00.189) 0:00:00.189 ******** 2026-04-09 00:25:11.971147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 00:25:11.971158 | orchestrator | 2026-04-09 00:25:11.971169 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-09 00:25:11.971181 | orchestrator | Thursday 09 April 2026 00:23:11 +0000 (0:00:00.076) 0:00:00.266 ******** 2026-04-09 00:25:11.971192 | orchestrator | ok: [testbed-manager] 2026-04-09 00:25:11.971204 | orchestrator | 2026-04-09 00:25:11.971216 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-09 00:25:11.971227 | orchestrator | Thursday 09 April 2026 00:23:13 +0000 (0:00:02.240) 0:00:02.506 ******** 2026-04-09 00:25:11.971238 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-09 00:25:11.971249 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-09 00:25:11.971260 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-09 00:25:11.971271 | orchestrator | 2026-04-09 00:25:11.971282 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-09 00:25:11.971293 | orchestrator | Thursday 09 April 2026 00:23:14 +0000 (0:00:01.197) 0:00:03.704 ******** 2026-04-09 00:25:11.971304 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-09 00:25:11.971315 | 
orchestrator | 2026-04-09 00:25:11.971326 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-09 00:25:11.971337 | orchestrator | Thursday 09 April 2026 00:23:15 +0000 (0:00:01.035) 0:00:04.739 ******** 2026-04-09 00:25:11.971348 | orchestrator | ok: [testbed-manager] 2026-04-09 00:25:11.971359 | orchestrator | 2026-04-09 00:25:11.971370 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-09 00:25:11.971396 | orchestrator | Thursday 09 April 2026 00:23:15 +0000 (0:00:00.348) 0:00:05.087 ******** 2026-04-09 00:25:11.971408 | orchestrator | changed: [testbed-manager] 2026-04-09 00:25:11.971419 | orchestrator | 2026-04-09 00:25:11.971430 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-09 00:25:11.971441 | orchestrator | Thursday 09 April 2026 00:23:16 +0000 (0:00:00.878) 0:00:05.966 ******** 2026-04-09 00:25:11.971452 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-09 00:25:11.971464 | orchestrator | ok: [testbed-manager] 2026-04-09 00:25:11.971475 | orchestrator | 2026-04-09 00:25:11.971486 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-09 00:25:11.971497 | orchestrator | Thursday 09 April 2026 00:23:55 +0000 (0:00:38.519) 0:00:44.485 ******** 2026-04-09 00:25:11.971508 | orchestrator | changed: [testbed-manager] 2026-04-09 00:25:11.971520 | orchestrator | 2026-04-09 00:25:11.971534 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-09 00:25:11.971546 | orchestrator | Thursday 09 April 2026 00:24:11 +0000 (0:00:15.704) 0:01:00.190 ******** 2026-04-09 00:25:11.971559 | orchestrator | Pausing for 60 seconds 2026-04-09 00:25:11.971573 | orchestrator | changed: [testbed-manager] 2026-04-09 00:25:11.971592 | orchestrator | 2026-04-09 00:25:11.971612 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-09 00:25:11.971632 | orchestrator | Thursday 09 April 2026 00:25:11 +0000 (0:01:00.077) 0:02:00.268 ******** 2026-04-09 00:25:11.971675 | orchestrator | ok: [testbed-manager] 2026-04-09 00:25:11.971694 | orchestrator | 2026-04-09 00:25:11.971714 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-09 00:25:11.971733 | orchestrator | Thursday 09 April 2026 00:25:11 +0000 (0:00:00.059) 0:02:00.327 ******** 2026-04-09 00:25:11.971753 | orchestrator | changed: [testbed-manager] 2026-04-09 00:25:11.971774 | orchestrator | 2026-04-09 00:25:11.971796 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:25:11.971810 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:25:11.971824 | orchestrator | 2026-04-09 00:25:11.971837 | orchestrator | 2026-04-09 00:25:11.971850 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-09 00:25:11.971863 | orchestrator | Thursday 09 April 2026 00:25:11 +0000 (0:00:00.606) 0:02:00.934 ******** 2026-04-09 00:25:11.971874 | orchestrator | =============================================================================== 2026-04-09 00:25:11.971885 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-04-09 00:25:11.971895 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 38.52s 2026-04-09 00:25:11.971906 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.70s 2026-04-09 00:25:11.971917 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.24s 2026-04-09 00:25:11.971927 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.20s 2026-04-09 00:25:11.971938 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.04s 2026-04-09 00:25:11.971949 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2026-04-09 00:25:11.971960 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2026-04-09 00:25:11.971971 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-04-09 00:25:11.971981 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-04-09 00:25:11.972018 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-09 00:25:12.133510 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 00:25:12.133593 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-09 00:25:12.140121 | orchestrator | + set -e 2026-04-09 00:25:12.140213 | orchestrator | + NAMESPACE=kolla 2026-04-09 
00:25:12.140238 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-09 00:25:12.143645 | orchestrator | ++ semver latest 9.0.0 2026-04-09 00:25:12.191451 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-09 00:25:12.191555 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 00:25:12.191945 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-09 00:25:23.541637 | orchestrator | 2026-04-09 00:25:23 | INFO  | Prepare task for execution of operator. 2026-04-09 00:25:23.616647 | orchestrator | 2026-04-09 00:25:23 | INFO  | Task 22da158f-b711-41a8-83a7-9aaee0b633b3 (operator) was prepared for execution. 2026-04-09 00:25:23.616745 | orchestrator | 2026-04-09 00:25:23 | INFO  | It takes a moment until task 22da158f-b711-41a8-83a7-9aaee0b633b3 (operator) has been started and output is visible here. 2026-04-09 00:25:38.842065 | orchestrator | 2026-04-09 00:25:38.842160 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-09 00:25:38.842171 | orchestrator | 2026-04-09 00:25:38.842178 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:25:38.842186 | orchestrator | Thursday 09 April 2026 00:25:26 +0000 (0:00:00.181) 0:00:00.181 ******** 2026-04-09 00:25:38.842193 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:25:38.842201 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:25:38.842208 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:25:38.842214 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:25:38.842243 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:25:38.842250 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:25:38.842256 | orchestrator | 2026-04-09 00:25:38.842262 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-09 00:25:38.842269 | orchestrator | Thursday 09 April 2026 00:25:30 
+0000 (0:00:03.518) 0:00:03.700 ******** 2026-04-09 00:25:38.842275 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:25:38.842281 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:25:38.842286 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:25:38.842291 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:25:38.842297 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:25:38.842303 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:25:38.842308 | orchestrator | 2026-04-09 00:25:38.842314 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-09 00:25:38.842320 | orchestrator | 2026-04-09 00:25:38.842326 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-09 00:25:38.842333 | orchestrator | Thursday 09 April 2026 00:25:31 +0000 (0:00:00.783) 0:00:04.483 ******** 2026-04-09 00:25:38.842339 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:25:38.842346 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:25:38.842352 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:25:38.842359 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:25:38.842365 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:25:38.842372 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:25:38.842379 | orchestrator | 2026-04-09 00:25:38.842386 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-09 00:25:38.842394 | orchestrator | Thursday 09 April 2026 00:25:31 +0000 (0:00:00.141) 0:00:04.625 ******** 2026-04-09 00:25:38.842401 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:25:38.842409 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:25:38.842416 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:25:38.842423 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:25:38.842430 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:25:38.842438 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:25:38.842445 | orchestrator | 
2026-04-09 00:25:38.842468 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-09 00:25:38.842476 | orchestrator | Thursday 09 April 2026 00:25:31 +0000 (0:00:00.153) 0:00:04.778 ******** 2026-04-09 00:25:38.842483 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:25:38.842492 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:25:38.842499 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:25:38.842506 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:25:38.842514 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:25:38.842522 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:25:38.842529 | orchestrator | 2026-04-09 00:25:38.842536 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-09 00:25:38.842544 | orchestrator | Thursday 09 April 2026 00:25:31 +0000 (0:00:00.653) 0:00:05.432 ******** 2026-04-09 00:25:38.842553 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:25:38.842562 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:25:38.842571 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:25:38.842579 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:25:38.842588 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:25:38.842596 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:25:38.842603 | orchestrator | 2026-04-09 00:25:38.842610 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-09 00:25:38.842618 | orchestrator | Thursday 09 April 2026 00:25:32 +0000 (0:00:00.873) 0:00:06.306 ******** 2026-04-09 00:25:38.842626 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-09 00:25:38.842633 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-09 00:25:38.842640 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-09 00:25:38.842648 | orchestrator | changed: [testbed-node-3] => (item=adm) 
2026-04-09 00:25:38.842654 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-09 00:25:38.842661 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-09 00:25:38.842674 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-09 00:25:38.842681 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-09 00:25:38.842687 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-09 00:25:38.842694 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-09 00:25:38.842700 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-09 00:25:38.842708 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-09 00:25:38.842714 | orchestrator | 2026-04-09 00:25:38.842721 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-09 00:25:38.842727 | orchestrator | Thursday 09 April 2026 00:25:34 +0000 (0:00:01.202) 0:00:07.508 ******** 2026-04-09 00:25:38.842733 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:25:38.842740 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:25:38.842746 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:25:38.842752 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:25:38.842758 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:25:38.842765 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:25:38.842771 | orchestrator | 2026-04-09 00:25:38.842778 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-09 00:25:38.842784 | orchestrator | Thursday 09 April 2026 00:25:35 +0000 (0:00:01.327) 0:00:08.836 ******** 2026-04-09 00:25:38.842791 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:25:38.842797 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:25:38.842803 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 
2026-04-09 00:25:38.842810 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:25:38.842816 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:25:38.842838 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:25:38.842845 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-09 00:25:38.842851 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-09 00:25:38.842857 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-09 00:25:38.842862 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-09 00:25:38.842868 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-09 00:25:38.842874 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-09 00:25:38.842880 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:25:38.842886 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-09 00:25:38.842892 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-09 00:25:38.842899 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-09 00:25:38.842909 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:25:38.842915 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:25:38.842921 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:25:38.842927 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:25:38.842934 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:25:38.842940 | orchestrator | 2026-04-09 00:25:38.842947 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-09 00:25:38.842954 | orchestrator | Thursday 09 April 2026 00:25:36 +0000 (0:00:01.354) 0:00:10.190 ******** 2026-04-09 00:25:38.842961 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:25:38.842967 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:25:38.842974 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:25:38.842980 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:25:38.843010 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:25:38.843017 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:25:38.843024 | orchestrator | 2026-04-09 00:25:38.843030 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-09 00:25:38.843036 | orchestrator | Thursday 09 April 2026 00:25:36 +0000 (0:00:00.148) 0:00:10.339 ******** 2026-04-09 00:25:38.843043 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:25:38.843049 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:25:38.843055 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:25:38.843062 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:25:38.843068 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:25:38.843074 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:25:38.843081 | orchestrator | 2026-04-09 00:25:38.843087 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-09 00:25:38.843093 | orchestrator | Thursday 09 April 2026 00:25:37 +0000 (0:00:00.158) 0:00:10.498 ******** 2026-04-09 00:25:38.843100 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:25:38.843106 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:25:38.843113 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:25:38.843119 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:25:38.843125 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:25:38.843132 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:25:38.843138 | orchestrator | 2026-04-09 00:25:38.843145 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-09 00:25:38.843151 | orchestrator | Thursday 09 April 2026 00:25:37 +0000 (0:00:00.518) 0:00:11.016 ******** 2026-04-09 00:25:38.843158 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:25:38.843164 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:25:38.843171 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:25:38.843177 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:25:38.843184 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:25:38.843190 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:25:38.843197 | orchestrator | 2026-04-09 00:25:38.843203 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-09 00:25:38.843210 | orchestrator | Thursday 09 April 2026 00:25:37 +0000 (0:00:00.139) 0:00:11.156 ******** 2026-04-09 00:25:38.843216 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-09 00:25:38.843223 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:25:38.843230 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 00:25:38.843236 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:25:38.843243 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:25:38.843249 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:25:38.843256 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-09 00:25:38.843262 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:25:38.843269 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:25:38.843275 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:25:38.843282 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:25:38.843288 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:25:38.843295 | orchestrator | 2026-04-09 00:25:38.843302 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-09 00:25:38.843308 | orchestrator | Thursday 09 April 2026 00:25:38 +0000 (0:00:00.862) 0:00:12.019 ******** 2026-04-09 00:25:38.843314 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:25:38.843321 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:25:38.843328 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:25:38.843334 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:25:38.843341 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:25:38.843347 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:25:38.843354 | orchestrator | 2026-04-09 00:25:38.843361 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-09 00:25:38.843367 | orchestrator | Thursday 09 April 2026 00:25:38 +0000 (0:00:00.141) 0:00:12.160 ******** 2026-04-09 00:25:38.843378 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:25:38.843385 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:25:38.843391 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:25:38.843398 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:25:38.843411 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:25:40.076420 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:25:40.076521 | orchestrator | 2026-04-09 00:25:40.076545 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-09 00:25:40.076558 | orchestrator | Thursday 09 April 2026 00:25:38 +0000 (0:00:00.158) 0:00:12.318 ******** 2026-04-09 00:25:40.076570 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:25:40.076581 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:25:40.076593 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:25:40.076604 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:25:40.076615 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:25:40.076630 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:25:40.076645 | orchestrator | 2026-04-09 00:25:40.076657 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-09 00:25:40.076668 | orchestrator | Thursday 09 April 2026 00:25:39 +0000 (0:00:00.137) 0:00:12.456 ******** 2026-04-09 00:25:40.076679 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:25:40.076689 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:25:40.076700 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:25:40.076711 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:25:40.076722 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:25:40.076733 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:25:40.076766 | orchestrator | 2026-04-09 00:25:40.076779 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-09 00:25:40.076790 | orchestrator | Thursday 09 April 2026 00:25:39 +0000 (0:00:00.674) 0:00:13.131 ******** 2026-04-09 00:25:40.076819 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:25:40.076837 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 00:25:40.076854 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:25:40.076872 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:25:40.076890 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:25:40.076908 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:25:40.076927 | orchestrator | 2026-04-09 00:25:40.076946 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:25:40.076966 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:25:40.077013 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:25:40.077052 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:25:40.077067 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:25:40.077080 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:25:40.077094 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:25:40.077107 | orchestrator | 2026-04-09 00:25:40.077121 | orchestrator | 2026-04-09 00:25:40.077133 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:25:40.077147 | orchestrator | Thursday 09 April 2026 00:25:39 +0000 (0:00:00.213) 0:00:13.344 ******** 2026-04-09 00:25:40.077160 | orchestrator | =============================================================================== 2026-04-09 00:25:40.077173 | orchestrator | Gathering Facts --------------------------------------------------------- 3.52s 2026-04-09 00:25:40.077211 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.35s 2026-04-09 
00:25:40.077226 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.33s 2026-04-09 00:25:40.077238 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s 2026-04-09 00:25:40.077251 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s 2026-04-09 00:25:40.077264 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.86s 2026-04-09 00:25:40.077277 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2026-04-09 00:25:40.077290 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2026-04-09 00:25:40.077303 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2026-04-09 00:25:40.077314 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.52s 2026-04-09 00:25:40.077325 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2026-04-09 00:25:40.077336 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-04-09 00:25:40.077347 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2026-04-09 00:25:40.077358 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2026-04-09 00:25:40.077369 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-04-09 00:25:40.077380 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2026-04-09 00:25:40.077391 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2026-04-09 00:25:40.077401 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.14s 
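The operator-role tasks recapped above (locale exports appended to `.bashrc`, sudoers copy, authorized keys) report `changed` on first run and `ok` on re-runs. That hinges on idempotent "append line only if absent" behavior, as used by `lineinfile`-style tasks. A minimal Python sketch of that contract, assuming a plain text file; the function name `ensure_line` is illustrative and not part of the `osism.commons.operator` role:

```python
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Append `line` to `path` unless it is already present.

    Returns True when the file changed, mirroring Ansible's
    ok/changed distinction for lineinfile-style tasks.
    """
    text = path.read_text() if path.exists() else ""
    if line in text.splitlines():
        return False  # already configured -> "ok"
    with path.open("a") as fh:
        if text and not text.endswith("\n"):
            fh.write("\n")  # keep entries on their own lines
        fh.write(line + "\n")
    return True  # appended -> "changed"
```

Applied to the three exports from the log (`LANGUAGE`, `LANG`, `LC_ALL`, all `C.UTF-8`), the first pass returns True for each; a second pass returns False for all of them, which is why a repeated play run would show `ok` instead of `changed`.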
2026-04-09 00:25:40.077412 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2026-04-09 00:25:40.250313 | orchestrator | + osism apply --environment custom facts 2026-04-09 00:25:41.406533 | orchestrator | 2026-04-09 00:25:41 | INFO  | Trying to run play facts in environment custom 2026-04-09 00:25:51.467782 | orchestrator | 2026-04-09 00:25:51 | INFO  | Prepare task for execution of facts. 2026-04-09 00:25:51.545510 | orchestrator | 2026-04-09 00:25:51 | INFO  | Task 5dd265bc-1028-4691-9588-c617c6ef7f1c (facts) was prepared for execution. 2026-04-09 00:25:51.545634 | orchestrator | 2026-04-09 00:25:51 | INFO  | It takes a moment until task 5dd265bc-1028-4691-9588-c617c6ef7f1c (facts) has been started and output is visible here. 2026-04-09 00:26:37.269291 | orchestrator | 2026-04-09 00:26:37.269389 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-09 00:26:37.269402 | orchestrator | 2026-04-09 00:26:37.269411 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-09 00:26:37.269425 | orchestrator | Thursday 09 April 2026 00:25:54 +0000 (0:00:00.114) 0:00:00.114 ******** 2026-04-09 00:26:37.269434 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:26:37.269443 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:37.269450 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:37.269457 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:37.269463 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:26:37.269470 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:37.269479 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:37.269491 | orchestrator | 2026-04-09 00:26:37.269501 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-09 00:26:37.269517 | orchestrator | Thursday 09 April 2026 00:25:56 +0000 (0:00:01.445) 
0:00:01.560 ******** 2026-04-09 00:26:37.269530 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:37.269540 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:26:37.269552 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:37.269564 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:37.269594 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:37.269605 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:37.269611 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:26:37.269618 | orchestrator | 2026-04-09 00:26:37.269625 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-09 00:26:37.269631 | orchestrator | 2026-04-09 00:26:37.269638 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-09 00:26:37.269644 | orchestrator | Thursday 09 April 2026 00:25:57 +0000 (0:00:01.355) 0:00:02.916 ******** 2026-04-09 00:26:37.269651 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:37.269658 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:37.269665 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:37.269671 | orchestrator | 2026-04-09 00:26:37.269678 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-09 00:26:37.269685 | orchestrator | Thursday 09 April 2026 00:25:57 +0000 (0:00:00.122) 0:00:03.038 ******** 2026-04-09 00:26:37.269692 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:37.269699 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:37.269705 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:37.269712 | orchestrator | 2026-04-09 00:26:37.269718 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-09 00:26:37.269725 | orchestrator | Thursday 09 April 2026 00:25:57 +0000 (0:00:00.184) 0:00:03.222 ******** 2026-04-09 00:26:37.269731 | orchestrator | ok: [testbed-node-3] 
2026-04-09 00:26:37.269738 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:37.269744 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:37.269751 | orchestrator | 2026-04-09 00:26:37.269758 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-09 00:26:37.269764 | orchestrator | Thursday 09 April 2026 00:25:57 +0000 (0:00:00.208) 0:00:03.431 ******** 2026-04-09 00:26:37.269772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:26:37.269780 | orchestrator | 2026-04-09 00:26:37.269786 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-09 00:26:37.269793 | orchestrator | Thursday 09 April 2026 00:25:58 +0000 (0:00:00.140) 0:00:03.572 ******** 2026-04-09 00:26:37.269799 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:37.269806 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:37.269812 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:37.269819 | orchestrator | 2026-04-09 00:26:37.269826 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-09 00:26:37.269832 | orchestrator | Thursday 09 April 2026 00:25:58 +0000 (0:00:00.442) 0:00:04.014 ******** 2026-04-09 00:26:37.269839 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:26:37.269846 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:26:37.269852 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:26:37.269859 | orchestrator | 2026-04-09 00:26:37.269866 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-09 00:26:37.269872 | orchestrator | Thursday 09 April 2026 00:25:58 +0000 (0:00:00.120) 0:00:04.135 ******** 2026-04-09 00:26:37.269879 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:37.269885 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:37.269892 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:37.269898 | orchestrator | 2026-04-09 00:26:37.269905 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-09 00:26:37.269912 | orchestrator | Thursday 09 April 2026 00:25:59 +0000 (0:00:01.181) 0:00:05.316 ******** 2026-04-09 00:26:37.269918 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:37.269925 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:37.269932 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:37.269938 | orchestrator | 2026-04-09 00:26:37.269945 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-09 00:26:37.269952 | orchestrator | Thursday 09 April 2026 00:26:00 +0000 (0:00:00.469) 0:00:05.786 ******** 2026-04-09 00:26:37.269963 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:37.269989 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:37.269997 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:37.270003 | orchestrator | 2026-04-09 00:26:37.270010 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-09 00:26:37.270078 | orchestrator | Thursday 09 April 2026 00:26:01 +0000 (0:00:01.104) 0:00:06.890 ******** 2026-04-09 00:26:37.270087 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:37.270094 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:37.270101 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:37.270107 | orchestrator | 2026-04-09 00:26:37.270114 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-09 00:26:37.270121 | orchestrator | Thursday 09 April 2026 00:26:18 +0000 (0:00:17.491) 0:00:24.382 ******** 2026-04-09 00:26:37.270127 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:26:37.270134 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:26:37.270141 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:26:37.270148 | orchestrator | 2026-04-09 00:26:37.270155 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-09 00:26:37.270175 | orchestrator | Thursday 09 April 2026 00:26:18 +0000 (0:00:00.095) 0:00:24.478 ******** 2026-04-09 00:26:37.270183 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:37.270190 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:37.270196 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:37.270203 | orchestrator | 2026-04-09 00:26:37.270210 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-09 00:26:37.270217 | orchestrator | Thursday 09 April 2026 00:26:27 +0000 (0:00:08.449) 0:00:32.927 ******** 2026-04-09 00:26:37.270223 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:37.270230 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:37.270237 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:37.270244 | orchestrator | 2026-04-09 00:26:37.270251 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-09 00:26:37.270257 | orchestrator | Thursday 09 April 2026 00:26:27 +0000 (0:00:00.495) 0:00:33.423 ******** 2026-04-09 00:26:37.270264 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-09 00:26:37.270271 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-09 00:26:37.270278 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-09 00:26:37.270285 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-09 00:26:37.270292 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-09 00:26:37.270298 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-09 00:26:37.270305 | 
orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-09 00:26:37.270312 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-09 00:26:37.270318 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-09 00:26:37.270325 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-09 00:26:37.270332 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-09 00:26:37.270338 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-09 00:26:37.270345 | orchestrator | 2026-04-09 00:26:37.270352 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-09 00:26:37.270358 | orchestrator | Thursday 09 April 2026 00:26:31 +0000 (0:00:03.552) 0:00:36.975 ******** 2026-04-09 00:26:37.270365 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:37.270372 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:37.270379 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:37.270385 | orchestrator | 2026-04-09 00:26:37.270392 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 00:26:37.270399 | orchestrator | 2026-04-09 00:26:37.270410 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 00:26:37.270417 | orchestrator | Thursday 09 April 2026 00:26:32 +0000 (0:00:01.314) 0:00:38.290 ******** 2026-04-09 00:26:37.270424 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:26:37.270430 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:26:37.270437 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:26:37.270444 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:37.270450 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:37.270457 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:37.270464 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 00:26:37.270470 | orchestrator | 2026-04-09 00:26:37.270477 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:26:37.270508 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:26:37.270516 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:26:37.270524 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:26:37.270531 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:26:37.270538 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:26:37.270545 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:26:37.270551 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:26:37.270561 | orchestrator | 2026-04-09 00:26:37.270572 | orchestrator | 2026-04-09 00:26:37.270591 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:26:37.270601 | orchestrator | Thursday 09 April 2026 00:26:37 +0000 (0:00:04.454) 0:00:42.745 ******** 2026-04-09 00:26:37.270612 | orchestrator | =============================================================================== 2026-04-09 00:26:37.270622 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.49s 2026-04-09 00:26:37.270632 | orchestrator | Install required packages (Debian) -------------------------------------- 8.45s 2026-04-09 00:26:37.270642 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.45s 2026-04-09 00:26:37.270653 | orchestrator | Copy fact files 
--------------------------------------------------------- 3.55s 2026-04-09 00:26:37.270664 | orchestrator | Create custom facts directory ------------------------------------------- 1.45s 2026-04-09 00:26:37.270676 | orchestrator | Copy fact file ---------------------------------------------------------- 1.36s 2026-04-09 00:26:37.270695 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s 2026-04-09 00:26:37.445151 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.18s 2026-04-09 00:26:37.445280 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s 2026-04-09 00:26:37.445298 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s 2026-04-09 00:26:37.445311 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s 2026-04-09 00:26:37.445323 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2026-04-09 00:26:37.445335 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s 2026-04-09 00:26:37.445347 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s 2026-04-09 00:26:37.445360 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2026-04-09 00:26:37.445399 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2026-04-09 00:26:37.445412 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2026-04-09 00:26:37.445424 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2026-04-09 00:26:37.619750 | orchestrator | + osism apply bootstrap 2026-04-09 00:26:48.888930 | orchestrator | 2026-04-09 00:26:48 | INFO  | Prepare task for execution of bootstrap. 
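The `osism apply --environment custom facts` play above drops files such as `testbed_ceph_osd_devices` into a custom facts directory. With Ansible's standard local-facts mechanism, a static JSON file named `<name>.fact` under `/etc/ansible/facts.d` is exposed to later plays as `ansible_local.<name>`. A small sketch of that static-JSON variant; the directory path and the `devices` payload are illustrative, only the fact-file naming convention is Ansible's:

```python
import json
from pathlib import Path

def write_fact(facts_dir: Path, name: str, data: dict) -> Path:
    """Write a static JSON fact file into an Ansible facts.d directory.

    Ansible's setup module reads <facts_dir>/<name>.fact and exposes
    its content to playbooks as ansible_local.<name>.
    """
    facts_dir.mkdir(parents=True, exist_ok=True)
    fact_file = facts_dir / f"{name}.fact"
    fact_file.write_text(json.dumps(data, indent=2))
    return fact_file

def read_fact(fact_file: Path) -> dict:
    """Parse a fact file back, as fact gathering would."""
    return json.loads(fact_file.read_text())
```

This is why the play re-gathers facts at the end ("Gathers facts about hosts"): the freshly copied `.fact` files only become visible as `ansible_local` values after the next fact-gathering pass.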
2026-04-09 00:26:48.958789 | orchestrator | 2026-04-09 00:26:48 | INFO  | Task 2d1a6589-1db0-4d4d-bc1f-a88687f275dd (bootstrap) was prepared for execution. 2026-04-09 00:26:48.958951 | orchestrator | 2026-04-09 00:26:48 | INFO  | It takes a moment until task 2d1a6589-1db0-4d4d-bc1f-a88687f275dd (bootstrap) has been started and output is visible here. 2026-04-09 00:27:05.332451 | orchestrator | 2026-04-09 00:27:05.332560 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-04-09 00:27:05.332571 | orchestrator | 2026-04-09 00:27:05.332578 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-04-09 00:27:05.332586 | orchestrator | Thursday 09 April 2026 00:26:52 +0000 (0:00:00.191) 0:00:00.191 ******** 2026-04-09 00:27:05.332592 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:05.332617 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:05.332624 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:05.332638 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:05.332649 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:05.332659 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:05.332666 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:05.332672 | orchestrator | 2026-04-09 00:27:05.332700 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 00:27:05.332708 | orchestrator | 2026-04-09 00:27:05.332715 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 00:27:05.332735 | orchestrator | Thursday 09 April 2026 00:26:52 +0000 (0:00:00.321) 0:00:00.512 ******** 2026-04-09 00:27:05.332742 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:05.332754 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:05.332761 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:05.332768 | orchestrator | ok: [testbed-manager] 2026-04-09 
2026-04-09 00:27:05.332774 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:05.332781 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:05.332787 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:05.332794 | orchestrator |
2026-04-09 00:27:05.332800 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-09 00:27:05.332805 | orchestrator |
2026-04-09 00:27:05.332811 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:27:05.332818 | orchestrator | Thursday 09 April 2026 00:26:57 +0000 (0:00:04.797) 0:00:05.309 ********
2026-04-09 00:27:05.332825 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-09 00:27:05.332832 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-09 00:27:05.332838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-09 00:27:05.332845 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-09 00:27:05.332852 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-09 00:27:05.332860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:27:05.332865 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-09 00:27:05.332869 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-09 00:27:05.332874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:27:05.332878 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-09 00:27:05.332882 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-09 00:27:05.332888 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 00:27:05.332895 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-09 00:27:05.332925 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:05.332932 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:27:05.332938 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 00:27:05.332945 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 00:27:05.332953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 00:27:05.333005 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 00:27:05.333015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 00:27:05.333021 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 00:27:05.333025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-09 00:27:05.333030 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 00:27:05.333035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 00:27:05.333040 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 00:27:05.333045 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:27:05.333050 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 00:27:05.333068 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 00:27:05.333072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:27:05.333077 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 00:27:05.333082 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-09 00:27:05.333086 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 00:27:05.333091 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:27:05.333095 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-09 00:27:05.333100 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 00:27:05.333104 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:27:05.333109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:27:05.333113 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:27:05.333118 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:27:05.333123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:27:05.333128 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:27:05.333132 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:27:05.333137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:27:05.333143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:27:05.333149 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:27:05.333156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:27:05.333183 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 00:27:05.333189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 00:27:05.333194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:27:05.333199 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:27:05.333204 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 00:27:05.333208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 00:27:05.333213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 00:27:05.333220 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:27:05.333227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 00:27:05.333234 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:27:05.333241 | orchestrator |
2026-04-09 00:27:05.333246 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-09 00:27:05.333251 | orchestrator |
2026-04-09 00:27:05.333256 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-09 00:27:05.333267 | orchestrator | Thursday 09 April 2026 00:26:57 +0000 (0:00:00.449) 0:00:05.759 ********
2026-04-09 00:27:05.333272 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:05.333277 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:05.333281 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:05.333286 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:05.333290 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:05.333295 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:05.333299 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:05.333304 | orchestrator |
2026-04-09 00:27:05.333308 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-09 00:27:05.333312 | orchestrator | Thursday 09 April 2026 00:26:59 +0000 (0:00:01.436) 0:00:07.196 ********
2026-04-09 00:27:05.333317 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:05.333322 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:05.333326 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:05.333330 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:05.333334 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:05.333338 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:05.333341 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:05.333345 | orchestrator |
2026-04-09 00:27:05.333349 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-09 00:27:05.333352 | orchestrator | Thursday 09 April 2026 00:27:00 +0000 (0:00:00.289) 0:00:08.473 ********
2026-04-09 00:27:05.333357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:05.333363 | orchestrator |
2026-04-09 00:27:05.333367 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-09 00:27:05.333371 | orchestrator | Thursday 09 April 2026 00:27:00 +0000 (0:00:00.289) 0:00:08.762 ********
2026-04-09 00:27:05.333375 | orchestrator | changed: [testbed-manager]
2026-04-09 00:27:05.333379 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:05.333382 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:05.333386 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:05.333390 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:05.333394 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:05.333398 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:05.333401 | orchestrator |
2026-04-09 00:27:05.333405 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-09 00:27:05.333409 | orchestrator | Thursday 09 April 2026 00:27:02 +0000 (0:00:01.575) 0:00:10.337 ********
2026-04-09 00:27:05.333413 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:05.333418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:05.333424 | orchestrator |
2026-04-09 00:27:05.333428 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-09 00:27:05.333432 | orchestrator | Thursday 09 April 2026 00:27:02 +0000 (0:00:00.265) 0:00:10.603 ********
2026-04-09 00:27:05.333436 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:05.333439 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:05.333443 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:05.333447 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:05.333453 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:05.333459 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:05.333465 | orchestrator |
2026-04-09 00:27:05.333471 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-09 00:27:05.333478 | orchestrator | Thursday 09 April 2026 00:27:03 +0000 (0:00:01.287) 0:00:11.890 ********
2026-04-09 00:27:05.333484 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:05.333491 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:05.333502 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:05.333516 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:05.333522 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:05.333528 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:05.333535 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:05.333540 | orchestrator |
2026-04-09 00:27:05.333546 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-09 00:27:05.333552 | orchestrator | Thursday 09 April 2026 00:27:04 +0000 (0:00:00.817) 0:00:12.707 ********
2026-04-09 00:27:05.333557 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:27:05.333561 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:27:05.333565 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:27:05.333568 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:27:05.333572 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:27:05.333576 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:27:05.333580 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:05.333584 | orchestrator |
2026-04-09 00:27:05.333588 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-09 00:27:05.333593 | orchestrator | Thursday 09 April 2026 00:27:05 +0000 (0:00:00.433) 0:00:13.141 ********
2026-04-09 00:27:05.333624 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:05.333628 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:27:05.333637 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:27:17.437533 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:27:17.437646 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:27:17.437663 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:27:17.437675 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:27:17.437687 | orchestrator |
2026-04-09 00:27:17.437700 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-09 00:27:17.437713 | orchestrator | Thursday 09 April 2026 00:27:05 +0000 (0:00:00.229) 0:00:13.371 ********
2026-04-09 00:27:17.437726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:17.437755 | orchestrator |
2026-04-09 00:27:17.437766 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-09 00:27:17.437779 | orchestrator | Thursday 09 April 2026 00:27:05 +0000 (0:00:00.305) 0:00:13.677 ********
2026-04-09 00:27:17.437790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:17.437801 | orchestrator |
2026-04-09 00:27:17.437812 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-09 00:27:17.437823 | orchestrator | Thursday 09 April 2026 00:27:06 +0000 (0:00:00.311) 0:00:13.988 ********
2026-04-09 00:27:17.437834 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.437846 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.437858 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:17.437869 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.437880 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:17.437890 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:17.437901 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.437912 | orchestrator |
2026-04-09 00:27:17.437924 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-09 00:27:17.437935 | orchestrator | Thursday 09 April 2026 00:27:07 +0000 (0:00:01.277) 0:00:15.266 ********
2026-04-09 00:27:17.437946 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:17.437985 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:27:17.438005 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:27:17.438117 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:27:17.438139 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:27:17.438190 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:27:17.438210 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:27:17.438227 | orchestrator |
2026-04-09 00:27:17.438246 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-09 00:27:17.438264 | orchestrator | Thursday 09 April 2026 00:27:07 +0000 (0:00:00.253) 0:00:15.519 ********
2026-04-09 00:27:17.438283 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.438301 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:17.438320 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:17.438339 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:17.438358 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.438377 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.438396 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.438414 | orchestrator |
2026-04-09 00:27:17.438433 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-09 00:27:17.438453 | orchestrator | Thursday 09 April 2026 00:27:08 +0000 (0:00:00.504) 0:00:16.024 ********
2026-04-09 00:27:17.438471 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:17.438490 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:27:17.438509 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:27:17.438528 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:27:17.438547 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:27:17.438565 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:27:17.438584 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:27:17.438603 | orchestrator |
2026-04-09 00:27:17.438622 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-09 00:27:17.438642 | orchestrator | Thursday 09 April 2026 00:27:08 +0000 (0:00:00.236) 0:00:16.260 ********
2026-04-09 00:27:17.438662 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.438681 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:17.438713 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:17.438732 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:17.438750 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:17.438769 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:17.438788 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:17.438806 | orchestrator |
2026-04-09 00:27:17.438825 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-09 00:27:17.438844 | orchestrator | Thursday 09 April 2026 00:27:08 +0000 (0:00:00.588) 0:00:16.849 ********
2026-04-09 00:27:17.438863 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.438881 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:17.438899 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:17.438917 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:17.438936 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:17.438979 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:17.439001 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:17.439022 | orchestrator |
2026-04-09 00:27:17.439042 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-09 00:27:17.439061 | orchestrator | Thursday 09 April 2026 00:27:10 +0000 (0:00:01.106) 0:00:17.955 ********
2026-04-09 00:27:17.439079 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.439107 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:17.439129 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.439147 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.439165 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:17.439183 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.439201 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:17.439221 | orchestrator |
2026-04-09 00:27:17.439241 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-09 00:27:17.439260 | orchestrator | Thursday 09 April 2026 00:27:11 +0000 (0:00:01.138) 0:00:19.094 ********
2026-04-09 00:27:17.439303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:17.439331 | orchestrator |
2026-04-09 00:27:17.439343 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-09 00:27:17.439354 | orchestrator | Thursday 09 April 2026 00:27:11 +0000 (0:00:00.320) 0:00:19.415 ********
2026-04-09 00:27:17.439365 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:17.439376 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:17.439387 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:17.439398 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:17.439409 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:17.439420 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:17.439431 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:17.439442 | orchestrator |
2026-04-09 00:27:17.439453 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-09 00:27:17.439464 | orchestrator | Thursday 09 April 2026 00:27:12 +0000 (0:00:01.443) 0:00:20.859 ********
2026-04-09 00:27:17.439475 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.439486 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:17.439497 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:17.439508 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:17.439519 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.439530 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.439540 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.439551 | orchestrator |
2026-04-09 00:27:17.439562 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-09 00:27:17.439574 | orchestrator | Thursday 09 April 2026 00:27:13 +0000 (0:00:00.221) 0:00:21.080 ********
2026-04-09 00:27:17.439584 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.439595 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:17.439606 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:17.439617 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:17.439628 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.439639 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.439649 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.439660 | orchestrator |
2026-04-09 00:27:17.439671 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-09 00:27:17.439682 | orchestrator | Thursday 09 April 2026 00:27:13 +0000 (0:00:00.212) 0:00:21.293 ********
2026-04-09 00:27:17.439693 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.439704 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:17.439715 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:17.439726 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:17.439736 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.439747 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.439758 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.439768 | orchestrator |
2026-04-09 00:27:17.439779 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-09 00:27:17.439790 | orchestrator | Thursday 09 April 2026 00:27:13 +0000 (0:00:00.207) 0:00:21.500 ********
2026-04-09 00:27:17.439802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:17.439815 | orchestrator |
2026-04-09 00:27:17.439826 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-09 00:27:17.439837 | orchestrator | Thursday 09 April 2026 00:27:13 +0000 (0:00:00.261) 0:00:21.762 ********
2026-04-09 00:27:17.439848 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.439859 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:17.439869 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:17.439880 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:17.439891 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.439902 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.439912 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.439923 | orchestrator |
2026-04-09 00:27:17.439934 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-09 00:27:17.440044 | orchestrator | Thursday 09 April 2026 00:27:14 +0000 (0:00:00.533) 0:00:22.295 ********
2026-04-09 00:27:17.440059 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:17.440070 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:27:17.440081 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:27:17.440093 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:27:17.440104 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:27:17.440115 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:27:17.440126 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:27:17.440137 | orchestrator |
2026-04-09 00:27:17.440147 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-09 00:27:17.440158 | orchestrator | Thursday 09 April 2026 00:27:14 +0000 (0:00:00.232) 0:00:22.528 ********
2026-04-09 00:27:17.440169 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.440180 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:17.440191 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.440202 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:17.440212 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:17.440223 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.440234 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.440245 | orchestrator |
2026-04-09 00:27:17.440256 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-09 00:27:17.440267 | orchestrator | Thursday 09 April 2026 00:27:15 +0000 (0:00:01.202) 0:00:23.731 ********
2026-04-09 00:27:17.440278 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.440289 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:17.440299 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:17.440310 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.440321 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:17.440331 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:17.440342 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:17.440353 | orchestrator |
2026-04-09 00:27:17.440364 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-09 00:27:17.440374 | orchestrator | Thursday 09 April 2026 00:27:16 +0000 (0:00:00.564) 0:00:24.295 ********
2026-04-09 00:27:17.440385 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:17.440396 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:17.440407 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:17.440418 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:17.440437 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:59.528569 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.528663 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.528675 | orchestrator |
2026-04-09 00:27:59.528684 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-09 00:27:59.528693 | orchestrator | Thursday 09 April 2026 00:27:17 +0000 (0:00:01.181) 0:00:25.477 ********
2026-04-09 00:27:59.528701 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.528722 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.528730 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.528738 | orchestrator | changed: [testbed-manager]
2026-04-09 00:27:59.528747 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:59.528754 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:59.528762 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:59.528769 | orchestrator |
2026-04-09 00:27:59.528778 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-09 00:27:59.528785 | orchestrator | Thursday 09 April 2026 00:27:35 +0000 (0:00:18.445) 0:00:43.923 ********
2026-04-09 00:27:59.528793 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.528801 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.528808 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.528815 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.528822 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.528830 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.528837 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.528864 | orchestrator |
2026-04-09 00:27:59.528873 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-09 00:27:59.528880 | orchestrator | Thursday 09 April 2026 00:27:36 +0000 (0:00:00.205) 0:00:44.129 ********
2026-04-09 00:27:59.528887 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.528895 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.528901 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.528908 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.528915 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.528921 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.528928 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.528935 | orchestrator |
2026-04-09 00:27:59.528975 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-09 00:27:59.528983 | orchestrator | Thursday 09 April 2026 00:27:36 +0000 (0:00:00.212) 0:00:44.341 ********
2026-04-09 00:27:59.528989 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.528996 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.529003 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.529010 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.529016 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.529023 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.529029 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.529036 | orchestrator |
2026-04-09 00:27:59.529043 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-09 00:27:59.529049 | orchestrator | Thursday 09 April 2026 00:27:36 +0000 (0:00:00.214) 0:00:44.555 ********
2026-04-09 00:27:59.529058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:59.529067 | orchestrator |
2026-04-09 00:27:59.529074 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-09 00:27:59.529080 | orchestrator | Thursday 09 April 2026 00:27:36 +0000 (0:00:00.275) 0:00:44.830 ********
2026-04-09 00:27:59.529087 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.529094 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.529100 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.529107 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.529114 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.529120 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.529127 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.529134 | orchestrator |
2026-04-09 00:27:59.529140 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-09 00:27:59.529147 | orchestrator | Thursday 09 April 2026 00:27:38 +0000 (0:00:01.928) 0:00:46.759 ********
2026-04-09 00:27:59.529153 | orchestrator | changed: [testbed-manager]
2026-04-09 00:27:59.529160 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:59.529167 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:59.529188 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:59.529195 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:59.529202 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:59.529213 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:59.529220 | orchestrator |
2026-04-09 00:27:59.529227 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-09 00:27:59.529233 | orchestrator | Thursday 09 April 2026 00:27:39 +0000 (0:00:01.037) 0:00:47.796 ********
2026-04-09 00:27:59.529240 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.529247 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.529253 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.529260 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.529267 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.529273 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.529280 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.529286 | orchestrator |
2026-04-09 00:27:59.529293 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-09 00:27:59.529306 | orchestrator | Thursday 09 April 2026 00:27:40 +0000 (0:00:00.856) 0:00:48.653 ********
2026-04-09 00:27:59.529314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:59.529323 | orchestrator |
2026-04-09 00:27:59.529329 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-09 00:27:59.529337 | orchestrator | Thursday 09 April 2026 00:27:41 +0000 (0:00:00.335) 0:00:48.988 ********
2026-04-09 00:27:59.529343 | orchestrator | changed: [testbed-manager]
2026-04-09 00:27:59.529350 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:59.529357 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:59.529363 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:59.529370 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:59.529377 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:59.529384 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:59.529390 | orchestrator |
2026-04-09 00:27:59.529408 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-09 00:27:59.529416 | orchestrator | Thursday 09 April 2026 00:27:42 +0000 (0:00:01.029) 0:00:50.017 ********
2026-04-09 00:27:59.529423 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:59.529430 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:27:59.529436 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:27:59.529443 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:27:59.529450 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:27:59.529456 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:27:59.529463 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:27:59.529470 | orchestrator |
2026-04-09 00:27:59.529480 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-09 00:27:59.529491 | orchestrator | Thursday 09 April 2026 00:27:42 +0000 (0:00:00.204) 0:00:50.221 ********
2026-04-09 00:27:59.529502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:59.529512 | orchestrator |
2026-04-09 00:27:59.529522 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-09 00:27:59.529533 | orchestrator | Thursday 09 April 2026 00:27:42 +0000 (0:00:00.280) 0:00:50.502 ********
2026-04-09 00:27:59.529543 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.529554 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.529564 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.529574 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.529584 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.529594 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.529604 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.529616 | orchestrator |
2026-04-09 00:27:59.529626 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-09 00:27:59.529637 | orchestrator | Thursday 09 April 2026 00:27:44 +0000 (0:00:01.919) 0:00:52.422 ********
2026-04-09 00:27:59.529647 | orchestrator | changed: [testbed-manager]
2026-04-09 00:27:59.529657 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:59.529668 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:59.529679 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:59.529691 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:59.529703 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:59.529713 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:59.529724 | orchestrator |
2026-04-09 00:27:59.529736 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-09 00:27:59.529748 | orchestrator | Thursday 09 April 2026 00:27:45 +0000 (0:00:01.153) 0:00:53.575 ********
2026-04-09 00:27:59.529759 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:59.529768 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:59.529782 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:59.529788 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:59.529795 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:59.529802 | orchestrator | changed: [testbed-manager]
2026-04-09 00:27:59.529808 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:59.529815 | orchestrator |
2026-04-09 00:27:59.529822 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-09 00:27:59.529828 | orchestrator | Thursday 09 April 2026 00:27:56 +0000 (0:00:10.546) 0:01:04.121 ********
2026-04-09 00:27:59.529835 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.529842 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.529848 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.529855 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.529862 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.529868 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.529875 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.529882 | orchestrator |
2026-04-09 00:27:59.529888 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-09 00:27:59.529895 | orchestrator | Thursday 09 April 2026 00:27:57 +0000 (0:00:01.471) 0:01:05.592 ********
2026-04-09 00:27:59.529902 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.529908 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.529915 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.529921 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.529928 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.529935 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.529962 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.529970 | orchestrator |
2026-04-09 00:27:59.529982 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-09 00:27:59.529998 | orchestrator | Thursday 09 April 2026 00:27:58 +0000 (0:00:01.141) 0:01:06.734 ********
2026-04-09 00:27:59.530005 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.530012 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.530070 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.530077 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.530084 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.530090 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.530097 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.530103 | orchestrator |
2026-04-09 00:27:59.530110 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-09 00:27:59.530117 | orchestrator | Thursday 09 April 2026 00:27:59 +0000 (0:00:00.231) 0:01:06.976 ********
2026-04-09 00:27:59.530124 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:59.530131 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:59.530137 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:59.530144 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:59.530151 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:59.530157 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:59.530164 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:59.530170 | orchestrator |
2026-04-09 00:27:59.530177 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-09 00:27:59.530183 | orchestrator | Thursday 09 April 2026 00:27:59 +0000 (0:00:00.253) 0:01:07.208 ********
2026-04-09 00:27:59.530190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:59.530198 | orchestrator |
2026-04-09 00:27:59.530213 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-09 00:30:09.157041 | orchestrator | Thursday 09 April 2026 00:27:59 +0000 (0:00:00.253) 0:01:07.462 ********
2026-04-09 00:30:09.157159 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:09.157177 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:09.157189 | orchestrator |
ok: [testbed-node-3] 2026-04-09 00:30:09.157201 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:09.157212 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:09.157246 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:09.157258 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:09.157269 | orchestrator | 2026-04-09 00:30:09.157281 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-09 00:30:09.157293 | orchestrator | Thursday 09 April 2026 00:28:01 +0000 (0:00:01.805) 0:01:09.267 ******** 2026-04-09 00:30:09.157304 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:09.157316 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:30:09.157327 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:30:09.157338 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:30:09.157348 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:30:09.157359 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:30:09.157370 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:30:09.157381 | orchestrator | 2026-04-09 00:30:09.157392 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-09 00:30:09.157404 | orchestrator | Thursday 09 April 2026 00:28:01 +0000 (0:00:00.639) 0:01:09.907 ******** 2026-04-09 00:30:09.157414 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:09.157425 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:09.157436 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:09.157447 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:09.157458 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:09.157469 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:09.157479 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:09.157490 | orchestrator | 2026-04-09 00:30:09.157501 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-09 
00:30:09.157512 | orchestrator | Thursday 09 April 2026 00:28:02 +0000 (0:00:00.280) 0:01:10.188 ******** 2026-04-09 00:30:09.157523 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:09.157537 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:09.157551 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:09.157564 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:09.157576 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:09.157589 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:09.157601 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:09.157613 | orchestrator | 2026-04-09 00:30:09.157626 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-09 00:30:09.157639 | orchestrator | Thursday 09 April 2026 00:28:03 +0000 (0:00:01.315) 0:01:11.503 ******** 2026-04-09 00:30:09.157651 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:09.157664 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:30:09.157677 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:30:09.157690 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:30:09.157702 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:30:09.157715 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:30:09.157728 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:30:09.157741 | orchestrator | 2026-04-09 00:30:09.157754 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-09 00:30:09.157767 | orchestrator | Thursday 09 April 2026 00:28:05 +0000 (0:00:01.757) 0:01:13.260 ******** 2026-04-09 00:30:09.157780 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:09.157792 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:09.157805 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:09.157819 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:09.157831 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:09.157844 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 00:30:09.157857 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:09.157870 | orchestrator | 2026-04-09 00:30:09.157942 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-09 00:30:09.157956 | orchestrator | Thursday 09 April 2026 00:28:08 +0000 (0:00:02.722) 0:01:15.983 ******** 2026-04-09 00:30:09.157968 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:09.157979 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:09.157990 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:09.158001 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:09.158078 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:09.158093 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:09.158104 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:09.158115 | orchestrator | 2026-04-09 00:30:09.158126 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-09 00:30:09.158152 | orchestrator | Thursday 09 April 2026 00:28:41 +0000 (0:00:33.093) 0:01:49.077 ******** 2026-04-09 00:30:09.158164 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:09.158175 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:30:09.158186 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:30:09.158197 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:30:09.158208 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:30:09.158220 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:30:09.158231 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:30:09.158241 | orchestrator | 2026-04-09 00:30:09.158253 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-09 00:30:09.158264 | orchestrator | Thursday 09 April 2026 00:29:54 +0000 (0:01:13.846) 0:03:02.923 ******** 2026-04-09 00:30:09.158275 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:09.158286 | orchestrator | 
ok: [testbed-node-2] 2026-04-09 00:30:09.158297 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:09.158308 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:09.158319 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:09.158330 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:09.158341 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:09.158352 | orchestrator | 2026-04-09 00:30:09.158363 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-09 00:30:09.158374 | orchestrator | Thursday 09 April 2026 00:29:56 +0000 (0:00:01.865) 0:03:04.789 ******** 2026-04-09 00:30:09.158385 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:09.158396 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:09.158407 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:09.158418 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:09.158429 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:09.158440 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:09.158451 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:09.158462 | orchestrator | 2026-04-09 00:30:09.158473 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-09 00:30:09.158484 | orchestrator | Thursday 09 April 2026 00:30:08 +0000 (0:00:11.248) 0:03:16.038 ******** 2026-04-09 00:30:09.158527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-09 00:30:09.158550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-09 00:30:09.158566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-09 00:30:09.158579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-09 00:30:09.158603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-09 00:30:09.158615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-09 00:30:09.158626 | orchestrator | 2026-04-09 00:30:09.158638 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-09 00:30:09.158649 | orchestrator | Thursday 09 April 2026 00:30:08 +0000 (0:00:00.367) 0:03:16.406 ******** 2026-04-09 00:30:09.158660 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-09 00:30:09.158671 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:09.158682 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-09 00:30:09.158694 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-09 00:30:09.158705 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:09.158716 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:09.158727 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-09 00:30:09.158738 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:09.158749 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:30:09.158760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:30:09.158771 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:30:09.158781 | orchestrator | 2026-04-09 00:30:09.158792 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-09 00:30:09.158803 | orchestrator | Thursday 09 April 2026 00:30:09 +0000 (0:00:00.635) 0:03:17.041 ******** 2026-04-09 00:30:09.158814 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-09 00:30:09.158833 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-09 00:30:09.158845 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-09 00:30:09.158856 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-09 00:30:09.158867 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-09 00:30:09.158928 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-09 00:30:15.931018 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-09 00:30:15.931117 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-09 00:30:15.931127 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-09 00:30:15.931135 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-09 00:30:15.931144 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:15.931154 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-09 00:30:15.931179 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-09 00:30:15.931187 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-09 00:30:15.931195 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-09 00:30:15.931202 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-09 00:30:15.931209 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-09 
00:30:15.931216 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-09 00:30:15.931224 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-09 00:30:15.931231 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-09 00:30:15.931239 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-09 00:30:15.931246 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-09 00:30:15.931253 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:15.931261 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-09 00:30:15.931268 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-09 00:30:15.931275 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-09 00:30:15.931283 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-09 00:30:15.931290 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-09 00:30:15.931297 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-09 00:30:15.931304 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-09 00:30:15.931311 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-09 00:30:15.931319 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-09 00:30:15.931326 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-09 00:30:15.931333 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-09 00:30:15.931340 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-09 00:30:15.931359 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-09 00:30:15.931367 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-09 00:30:15.931374 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:15.931382 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-09 00:30:15.931389 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-09 00:30:15.931396 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-09 00:30:15.931403 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-09 00:30:15.931410 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-09 00:30:15.931418 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:15.931425 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-09 00:30:15.931438 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-09 00:30:15.931445 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-09 00:30:15.931453 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-09 00:30:15.931460 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-09 00:30:15.931480 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-09 00:30:15.931488 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-09 00:30:15.931495 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-09 00:30:15.931502 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-09 00:30:15.931509 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-09 00:30:15.931516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-09 00:30:15.931525 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-09 00:30:15.931534 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-09 00:30:15.931542 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-09 00:30:15.931551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-09 00:30:15.931559 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-09 00:30:15.931567 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-09 00:30:15.931576 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-09 00:30:15.931584 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-09 00:30:15.931593 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 
2026-04-09 00:30:15.931602 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-09 00:30:15.931610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-09 00:30:15.931618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-09 00:30:15.931627 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-09 00:30:15.931635 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-09 00:30:15.931643 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-09 00:30:15.931651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-09 00:30:15.931658 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-09 00:30:15.931665 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-09 00:30:15.931672 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-09 00:30:15.931679 | orchestrator | 2026-04-09 00:30:15.931688 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-09 00:30:15.931695 | orchestrator | Thursday 09 April 2026 00:30:13 +0000 (0:00:04.768) 0:03:21.810 ******** 2026-04-09 00:30:15.931703 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-09 00:30:15.931715 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-09 00:30:15.931736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-09 00:30:15.931749 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-09 00:30:15.931756 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-09 00:30:15.931763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-09 00:30:15.931771 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-09 00:30:15.931778 | orchestrator | 2026-04-09 00:30:15.931785 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-09 00:30:15.931792 | orchestrator | Thursday 09 April 2026 00:30:15 +0000 (0:00:01.506) 0:03:23.317 ******** 2026-04-09 00:30:15.931799 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-09 00:30:15.931806 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-09 00:30:15.931814 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:15.931821 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-09 00:30:15.931828 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:15.931838 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:15.931850 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-09 00:30:15.931862 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:30:15.931896 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-09 00:30:15.931910 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-09 00:30:15.931929 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-09 
00:30:28.212246 | orchestrator | 2026-04-09 00:30:28.212430 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-09 00:30:28.212446 | orchestrator | Thursday 09 April 2026 00:30:15 +0000 (0:00:00.579) 0:03:23.896 ******** 2026-04-09 00:30:28.212454 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-09 00:30:28.212463 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:28.212472 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-09 00:30:28.212480 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:28.212487 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-09 00:30:28.212494 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:28.212501 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-09 00:30:28.212508 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:28.212515 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-09 00:30:28.212522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-09 00:30:28.212529 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-09 00:30:28.212536 | orchestrator | 2026-04-09 00:30:28.212543 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-09 00:30:28.212550 | orchestrator | Thursday 09 April 2026 00:30:16 +0000 (0:00:00.551) 0:03:24.447 ******** 2026-04-09 00:30:28.212557 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-09 
00:30:28.212564 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-09 00:30:28.212590 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:28.212597 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-09 00:30:28.212604 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:28.212611 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:28.212617 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-09 00:30:28.212624 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:30:28.212629 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-09 00:30:28.212636 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-09 00:30:28.212642 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-09 00:30:28.212648 | orchestrator | 2026-04-09 00:30:28.212655 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-09 00:30:28.212661 | orchestrator | Thursday 09 April 2026 00:30:17 +0000 (0:00:00.701) 0:03:25.149 ******** 2026-04-09 00:30:28.212667 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:28.212674 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:28.212680 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:28.212686 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:30:28.212692 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:28.212698 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:28.212704 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:28.212710 | orchestrator | 2026-04-09 00:30:28.212716 | orchestrator | TASK 
[osism.commons.services : Populate service facts] *************************
2026-04-09 00:30:28.212723 | orchestrator | Thursday 09 April 2026 00:30:17 +0000 (0:00:00.285) 0:03:25.435 ********
2026-04-09 00:30:28.212729 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:28.212737 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:30:28.212743 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:30:28.212749 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:30:28.212755 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:30:28.212760 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:30:28.212767 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:28.212772 | orchestrator |
2026-04-09 00:30:28.212779 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-09 00:30:28.212785 | orchestrator | Thursday 09 April 2026 00:30:22 +0000 (0:00:04.928) 0:03:30.363 ********
2026-04-09 00:30:28.212791 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-09 00:30:28.212798 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-09 00:30:28.212805 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:30:28.212811 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:30:28.212818 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-09 00:30:28.212825 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-09 00:30:28.212832 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:30:28.212839 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-09 00:30:28.212847 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:30:28.212856 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-09 00:30:28.212885 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:30:28.212893 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:30:28.212901 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-09 00:30:28.212908 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:30:28.212915 | orchestrator |
2026-04-09 00:30:28.212923 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-09 00:30:28.212930 | orchestrator | Thursday 09 April 2026 00:30:22 +0000 (0:00:00.324) 0:03:30.690 ********
2026-04-09 00:30:28.212937 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-09 00:30:28.212945 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-09 00:30:28.212952 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-09 00:30:28.212986 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-09 00:30:28.212995 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-09 00:30:28.213004 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-09 00:30:28.213011 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-09 00:30:28.213019 | orchestrator |
2026-04-09 00:30:28.213027 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-09 00:30:28.213036 | orchestrator | Thursday 09 April 2026 00:30:23 +0000 (0:00:01.170) 0:03:31.860 ********
2026-04-09 00:30:28.213045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:30:28.213053 | orchestrator |
2026-04-09 00:30:28.213061 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-09 00:30:28.213069 | orchestrator | Thursday 09 April 2026 00:30:24 +0000 (0:00:00.399) 0:03:32.260 ********
2026-04-09 00:30:28.213077 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:28.213086 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:28.213094 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:30:28.213102 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:30:28.213109 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:30:28.213116 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:30:28.213123 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:30:28.213130 | orchestrator |
2026-04-09 00:30:28.213138 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-09 00:30:28.213145 | orchestrator | Thursday 09 April 2026 00:30:25 +0000 (0:00:01.461) 0:03:33.721 ********
2026-04-09 00:30:28.213153 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:28.213160 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:28.213168 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:30:28.213176 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:30:28.213183 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:30:28.213190 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:30:28.213198 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:30:28.213205 | orchestrator |
2026-04-09 00:30:28.213211 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-09 00:30:28.213218 | orchestrator | Thursday 09 April 2026 00:30:26 +0000 (0:00:00.609) 0:03:34.330 ********
2026-04-09 00:30:28.213224 | orchestrator | changed: [testbed-manager]
2026-04-09 00:30:28.213231 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:30:28.213238 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:30:28.213245 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:30:28.213252 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:30:28.213259 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:30:28.213265 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:30:28.213271 | orchestrator |
2026-04-09 00:30:28.213278 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-09 00:30:28.213284 | orchestrator | Thursday 09 April 2026 00:30:27 +0000 (0:00:00.620) 0:03:34.992 ********
2026-04-09 00:30:28.213291 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:28.213297 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:30:28.213304 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:30:28.213310 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:30:28.213316 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:30:28.213322 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:30:28.213345 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:28.213353 | orchestrator |
2026-04-09 00:30:28.213359 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-09 00:30:28.213366 | orchestrator | Thursday 09 April 2026 00:30:27 +0000 (0:00:00.620) 0:03:35.613 ********
2026-04-09 00:30:28.213377 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693118.1432734, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:28.213394 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693133.5071442, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:28.213401 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693147.68768, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:28.213428 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693150.0178843, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753690 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693159.0397758, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753783 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693162.7407148, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753791 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693138.284982, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753809 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753828 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753834 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753838 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753937 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753945 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753950 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:30:33.753955 | orchestrator |
2026-04-09 00:30:33.753961 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-09 00:30:33.753967 | orchestrator | Thursday 09 April 2026 00:30:28 +0000 (0:00:00.976) 0:03:36.589 ********
2026-04-09 00:30:33.753978 | orchestrator | changed: [testbed-manager]
2026-04-09 00:30:33.753985 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:30:33.753989 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:30:33.753994 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:30:33.753999 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:30:33.754003 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:30:33.754008 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:30:33.754012 | orchestrator |
2026-04-09 00:30:33.754052 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-09 00:30:33.754057 | orchestrator | Thursday 09 April 2026 00:30:29 +0000 (0:00:01.083) 0:03:37.672 ********
2026-04-09 00:30:33.754062 | orchestrator | changed: [testbed-manager]
2026-04-09 00:30:33.754067 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:30:33.754071 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:30:33.754079 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:30:33.754084 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:30:33.754089 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:30:33.754093 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:30:33.754098 | orchestrator |
2026-04-09 00:30:33.754103 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-09 00:30:33.754108 | orchestrator | Thursday 09 April 2026 00:30:30 +0000 (0:00:01.191) 0:03:38.864 ********
2026-04-09 00:30:33.754112 | orchestrator | changed: [testbed-manager]
2026-04-09 00:30:33.754117 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:30:33.754121 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:30:33.754126 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:30:33.754131 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:30:33.754135 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:30:33.754140 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:30:33.754144 | orchestrator |
2026-04-09 00:30:33.754149 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-09 00:30:33.754154 | orchestrator | Thursday 09 April 2026 00:30:32 +0000 (0:00:01.339) 0:03:40.204 ********
2026-04-09 00:30:33.754158 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:30:33.754163 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:30:33.754168 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:30:33.754172 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:30:33.754177 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:30:33.754181 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:30:33.754186 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:30:33.754190 | orchestrator |
2026-04-09 00:30:33.754195 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-09 00:30:33.754200 | orchestrator | Thursday 09 April 2026 00:30:32 +0000 (0:00:00.276) 0:03:40.480 ********
2026-04-09 00:30:33.754204 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:33.754210 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:33.754214 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:30:33.754219 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:30:33.754224 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:30:33.754229 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:30:33.754234 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:30:33.754240 | orchestrator |
2026-04-09 00:30:33.754245 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-09 00:30:33.754251 | orchestrator | Thursday 09 April 2026 00:30:33 +0000 (0:00:00.822) 0:03:41.303 ********
2026-04-09 00:30:33.754258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:30:33.754265 | orchestrator |
2026-04-09 00:30:33.754270 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-09 00:30:33.754280 | orchestrator | Thursday 09 April 2026 00:30:33 +0000 (0:00:00.384) 0:03:41.688 ********
2026-04-09 00:31:51.319112 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:51.319255 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:51.319277 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:51.319289 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:51.319300 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:51.319311 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:51.319323 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:51.319335 | orchestrator |
2026-04-09 00:31:51.319347 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-09 00:31:51.319360 | orchestrator | Thursday 09 April 2026 00:30:42 +0000 (0:00:08.679) 0:03:50.367 ********
2026-04-09 00:31:51.319371 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:51.319400 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:51.319423 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:51.319434 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:51.319445 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:51.319456 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:51.319467 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:51.319478 | orchestrator |
2026-04-09 00:31:51.319489 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-09 00:31:51.319500 | orchestrator | Thursday 09 April 2026 00:30:43 +0000 (0:00:01.324) 0:03:51.692 ********
2026-04-09 00:31:51.319511 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:51.319522 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:51.319533 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:51.319544 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:51.319555 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:51.319566 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:51.319577 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:51.319587 | orchestrator |
2026-04-09 00:31:51.319599 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-09 00:31:51.319610 | orchestrator | Thursday 09 April 2026 00:30:44 +0000 (0:00:01.013) 0:03:52.705 ********
2026-04-09 00:31:51.319621 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:51.319634 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:51.319647 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:51.319661 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:51.319674 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:51.319694 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:51.319712 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:51.319730 | orchestrator |
2026-04-09 00:31:51.319750 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-09 00:31:51.319771 | orchestrator | Thursday 09 April 2026 00:30:45 +0000 (0:00:00.288) 0:03:52.994 ********
2026-04-09 00:31:51.319814 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:51.319828 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:51.319839 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:51.319850 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:51.319863 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:51.319882 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:51.319899 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:51.319916 | orchestrator |
2026-04-09 00:31:51.319935 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-09 00:31:51.319956 | orchestrator | Thursday 09 April 2026 00:30:45 +0000 (0:00:00.310) 0:03:53.305 ********
2026-04-09 00:31:51.319974 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:51.319993 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:51.320004 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:51.320015 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:51.320026 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:51.320037 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:51.320055 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:51.320073 | orchestrator |
2026-04-09 00:31:51.320092 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-09 00:31:51.320109 | orchestrator | Thursday 09 April 2026 00:30:45 +0000 (0:00:00.309) 0:03:53.615 ********
2026-04-09 00:31:51.320161 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:51.320182 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:51.320200 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:51.320211 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:51.320222 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:51.320233 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:51.320244 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:51.320255 | orchestrator |
2026-04-09 00:31:51.320266 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-09 00:31:51.320276 | orchestrator | Thursday 09 April 2026 00:30:50 +0000 (0:00:04.662) 0:03:58.278 ********
2026-04-09 00:31:51.320290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:31:51.320303 | orchestrator |
2026-04-09 00:31:51.320314 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-09 00:31:51.320331 | orchestrator | Thursday 09 April 2026 00:30:50 +0000 (0:00:00.378) 0:03:58.656 ********
2026-04-09 00:31:51.320348 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-09 00:31:51.320365 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-09 00:31:51.320386 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-09 00:31:51.320399 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-09 00:31:51.320410 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:51.320421 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-09 00:31:51.320432 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-09 00:31:51.320443 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:51.320454 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-09 00:31:51.320464 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-09 00:31:51.320475 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:51.320486 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-09 00:31:51.320497 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-09 00:31:51.320508 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:51.320519 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:51.320530 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-09 00:31:51.320562 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-09 00:31:51.320574 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:51.320585 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-09 00:31:51.320596 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-09 00:31:51.320607 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:51.320618 | orchestrator |
2026-04-09 00:31:51.320629 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-09 00:31:51.320640 | orchestrator | Thursday 09 April 2026 00:30:51 +0000 (0:00:00.309) 0:03:58.966 ********
2026-04-09 00:31:51.320651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:31:51.320662 | orchestrator |
2026-04-09 00:31:51.320673 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-09 00:31:51.320684 | orchestrator | Thursday 09 April 2026 00:30:51 +0000 (0:00:00.460) 0:03:59.426 ********
2026-04-09 00:31:51.320695 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-09 00:31:51.320705 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-09 00:31:51.320717 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:51.320728 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-09 00:31:51.320749 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:51.320760 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:51.320771 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-09 00:31:51.320866 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-09 00:31:51.320889 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:51.320904 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-09 00:31:51.320921 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:51.320939 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:51.320957 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-09 00:31:51.320976 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:51.320996 | orchestrator |
2026-04-09 00:31:51.321015 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-09 00:31:51.321033 | orchestrator | Thursday 09 April 2026 00:30:51 +0000 (0:00:00.344) 0:03:59.771 ********
2026-04-09 00:31:51.321044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:31:51.321055 | orchestrator |
2026-04-09 00:31:51.321084 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-09 00:31:51.321096 | orchestrator | Thursday 09 April 2026 00:30:52 +0000 (0:00:00.408) 0:04:00.179 ********
2026-04-09 00:31:51.321111 | orchestrator | changed: [testbed-manager]
2026-04-09 00:31:51.321122 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:51.321133 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:51.321144 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:51.321155 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:51.321165 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:51.321176 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:51.321187 | orchestrator |
2026-04-09 00:31:51.321198 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-09 00:31:51.321208 | orchestrator | Thursday 09 April 2026 00:31:26 +0000 (0:00:34.336) 0:04:34.516 ********
2026-04-09 00:31:51.321219 | orchestrator | changed: [testbed-manager]
2026-04-09 00:31:51.321230 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:51.321241 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:51.321251 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:51.321262 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:51.321273 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:51.321283 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:51.321294 | orchestrator |
2026-04-09 00:31:51.321304 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-09 00:31:51.321315 | orchestrator | Thursday 09 April 2026 00:31:34 +0000 (0:00:08.282) 0:04:42.798 ********
2026-04-09 00:31:51.321326 | orchestrator | changed: [testbed-manager]
2026-04-09 00:31:51.321337 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:51.321347 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:51.321358 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:51.321369 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:51.321379 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:51.321390 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:51.321401 | orchestrator |
2026-04-09 00:31:51.321411 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-09 00:31:51.321422 | orchestrator | Thursday 09 April 2026 00:31:43 +0000 (0:00:08.425) 0:04:51.224 ********
2026-04-09 00:31:51.321433 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:51.321444 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:51.321454 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:51.321465 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:51.321475 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:51.321495 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:51.321509 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:51.321526 | orchestrator |
2026-04-09 00:31:51.321544 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-09 00:31:51.321562 | orchestrator | Thursday 09 April 2026 00:31:45 +0000 (0:00:01.773) 0:04:52.998 ********
2026-04-09 00:31:51.321582 | orchestrator | changed: [testbed-manager]
2026-04-09 00:31:51.321600 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:51.321619 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:51.321632 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:51.321643 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:51.321653 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:51.321664 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:51.321675 | orchestrator |
2026-04-09 00:31:51.321696 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-09 00:32:03.246841 | orchestrator | Thursday 09 April 2026 00:31:51 +0000 (0:00:06.252) 0:04:59.250 ********
2026-04-09 00:32:03.246945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:32:03.246965 | orchestrator |
2026-04-09 00:32:03.246978 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-09 00:32:03.246990 | orchestrator | Thursday 09 April 2026 00:31:51 +0000 (0:00:00.392) 0:04:59.643 ********
2026-04-09 00:32:03.247001 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:03.247014 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:03.247023 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:03.247029 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:03.247036 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:03.247042 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:03.247049 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:03.247055 | orchestrator |
2026-04-09 00:32:03.247062 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-09 00:32:03.247068 | orchestrator | Thursday 09 April 2026 00:31:52 +0000 (0:00:00.727) 0:05:00.370 ********
2026-04-09 00:32:03.247075 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:03.247082 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:03.247089 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:03.247095 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:03.247101 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:03.247107 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:03.247113 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:03.247120 | orchestrator |
2026-04-09 00:32:03.247126 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-09 00:32:03.247132 | orchestrator | Thursday 09 April 2026 00:31:54 +0000 (0:00:01.955) 0:05:02.326 ********
2026-04-09 00:32:03.247148 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:03.247155 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:03.247161 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:03.247167 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:03.247174 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:03.247180 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:03.247186 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:03.247192 | orchestrator |
2026-04-09 00:32:03.247199 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-09 00:32:03.247205 | orchestrator | Thursday 09 April 2026 00:31:55 +0000 (0:00:00.807) 0:05:03.134 ********
2026-04-09 00:32:03.247211 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:32:03.247217 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:32:03.247223 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:32:03.247230 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:32:03.247236 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:32:03.247242 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:32:03.247266 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:32:03.247273 | orchestrator |
2026-04-09 00:32:03.247279 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-09 00:32:03.247297 | orchestrator | Thursday 09 April 2026 00:31:55 +0000 (0:00:00.250) 0:05:03.384 ********
2026-04-09 00:32:03.247304 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:32:03.247310 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:32:03.247316 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:32:03.247322 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:32:03.247329 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:32:03.247335 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:32:03.247341 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:32:03.247347 | orchestrator |
2026-04-09 00:32:03.247353 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-09 00:32:03.247360 | orchestrator | Thursday 09 April 2026 00:31:55 +0000 (0:00:00.384) 0:05:03.768 ********
2026-04-09 00:32:03.247366 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:03.247372 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:03.247380 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:03.247387 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:03.247395 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:03.247403 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:03.247410 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:03.247417 | orchestrator |
2026-04-09 00:32:03.247425 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-09 00:32:03.247432 | orchestrator | Thursday 09 April 2026 00:31:56 +0000 (0:00:00.388) 0:05:04.157 ********
2026-04-09 00:32:03.247439 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:32:03.247446 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:32:03.247453 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:32:03.247460 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:32:03.247468 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:32:03.247475 | orchestrator | skipping: [testbed-node-4]
2026-04-09
00:32:03.247482 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:32:03.247490 | orchestrator | 2026-04-09 00:32:03.247498 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-09 00:32:03.247506 | orchestrator | Thursday 09 April 2026 00:31:56 +0000 (0:00:00.255) 0:05:04.412 ******** 2026-04-09 00:32:03.247513 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:03.247521 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:32:03.247528 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:32:03.247535 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:32:03.247542 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:32:03.247550 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:32:03.247557 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:32:03.247565 | orchestrator | 2026-04-09 00:32:03.247572 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-09 00:32:03.247580 | orchestrator | Thursday 09 April 2026 00:31:56 +0000 (0:00:00.276) 0:05:04.689 ******** 2026-04-09 00:32:03.247587 | orchestrator | ok: [testbed-manager] =>  2026-04-09 00:32:03.247594 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:32:03.247602 | orchestrator | ok: [testbed-node-0] =>  2026-04-09 00:32:03.247609 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:32:03.247616 | orchestrator | ok: [testbed-node-1] =>  2026-04-09 00:32:03.247624 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:32:03.247631 | orchestrator | ok: [testbed-node-2] =>  2026-04-09 00:32:03.247638 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:32:03.247659 | orchestrator | ok: [testbed-node-3] =>  2026-04-09 00:32:03.247667 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:32:03.247674 | orchestrator | ok: [testbed-node-4] =>  2026-04-09 00:32:03.247681 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:32:03.247689 | orchestrator | ok: [testbed-node-5] =>  
2026-04-09 00:32:03.247707 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:32:03.247714 | orchestrator | 2026-04-09 00:32:03.247726 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-09 00:32:03.247739 | orchestrator | Thursday 09 April 2026 00:31:57 +0000 (0:00:00.260) 0:05:04.949 ******** 2026-04-09 00:32:03.247746 | orchestrator | ok: [testbed-manager] =>  2026-04-09 00:32:03.247752 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:32:03.247787 | orchestrator | ok: [testbed-node-0] =>  2026-04-09 00:32:03.247794 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:32:03.247800 | orchestrator | ok: [testbed-node-1] =>  2026-04-09 00:32:03.247807 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:32:03.247813 | orchestrator | ok: [testbed-node-2] =>  2026-04-09 00:32:03.247819 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:32:03.247825 | orchestrator | ok: [testbed-node-3] =>  2026-04-09 00:32:03.247832 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:32:03.247838 | orchestrator | ok: [testbed-node-4] =>  2026-04-09 00:32:03.247844 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:32:03.247850 | orchestrator | ok: [testbed-node-5] =>  2026-04-09 00:32:03.247856 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:32:03.247863 | orchestrator | 2026-04-09 00:32:03.247869 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-09 00:32:03.247875 | orchestrator | Thursday 09 April 2026 00:31:57 +0000 (0:00:00.263) 0:05:05.213 ******** 2026-04-09 00:32:03.247881 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:32:03.247888 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:32:03.247894 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:32:03.247900 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:32:03.247906 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 00:32:03.247912 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:32:03.247919 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:32:03.247925 | orchestrator | 2026-04-09 00:32:03.247931 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-09 00:32:03.247937 | orchestrator | Thursday 09 April 2026 00:31:57 +0000 (0:00:00.239) 0:05:05.452 ******** 2026-04-09 00:32:03.247944 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:32:03.247950 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:32:03.247956 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:32:03.247962 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:32:03.247968 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:32:03.247975 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:32:03.247981 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:32:03.247987 | orchestrator | 2026-04-09 00:32:03.247994 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-09 00:32:03.248000 | orchestrator | Thursday 09 April 2026 00:31:57 +0000 (0:00:00.251) 0:05:05.703 ******** 2026-04-09 00:32:03.248012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:32:03.248020 | orchestrator | 2026-04-09 00:32:03.248026 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-09 00:32:03.248033 | orchestrator | Thursday 09 April 2026 00:31:58 +0000 (0:00:00.394) 0:05:06.098 ******** 2026-04-09 00:32:03.248039 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:03.248045 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:32:03.248052 | orchestrator | ok: [testbed-node-0] 2026-04-09 
00:32:03.248058 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:32:03.248064 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:32:03.248071 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:32:03.248077 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:32:03.248083 | orchestrator | 2026-04-09 00:32:03.248089 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-09 00:32:03.248096 | orchestrator | Thursday 09 April 2026 00:31:58 +0000 (0:00:00.840) 0:05:06.938 ******** 2026-04-09 00:32:03.248102 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:03.248113 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:32:03.248119 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:32:03.248126 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:32:03.248132 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:32:03.248138 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:32:03.248144 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:32:03.248150 | orchestrator | 2026-04-09 00:32:03.248157 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-09 00:32:03.248164 | orchestrator | Thursday 09 April 2026 00:32:02 +0000 (0:00:03.842) 0:05:10.781 ******** 2026-04-09 00:32:03.248170 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-09 00:32:03.248178 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-09 00:32:03.248184 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-09 00:32:03.248190 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-09 00:32:03.248196 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-09 00:32:03.248203 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-09 00:32:03.248209 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:32:03.248215 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-04-09 00:32:03.248221 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-09 00:32:03.248228 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-09 00:32:03.248234 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:32:03.248240 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-09 00:32:03.248247 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-09 00:32:03.248253 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-09 00:32:03.248259 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:32:03.248265 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-09 00:32:03.248276 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-09 00:33:04.888841 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-09 00:33:04.888957 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:04.888974 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-09 00:33:04.888987 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-09 00:33:04.888998 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-09 00:33:04.889009 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:04.889020 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:04.889031 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-09 00:33:04.889042 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-09 00:33:04.889053 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-09 00:33:04.889064 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:04.889075 | orchestrator | 2026-04-09 00:33:04.889088 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-09 00:33:04.889103 | orchestrator | Thursday 
09 April 2026 00:32:03 +0000 (0:00:00.681) 0:05:11.463 ******** 2026-04-09 00:33:04.889122 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:04.889152 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.889173 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.889191 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.889209 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.889225 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.889243 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.889262 | orchestrator | 2026-04-09 00:33:04.889282 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-09 00:33:04.889301 | orchestrator | Thursday 09 April 2026 00:32:10 +0000 (0:00:06.947) 0:05:18.411 ******** 2026-04-09 00:33:04.889319 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:04.889338 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.889376 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.889390 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.889404 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.889417 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.889431 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.889445 | orchestrator | 2026-04-09 00:33:04.889464 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-09 00:33:04.889483 | orchestrator | Thursday 09 April 2026 00:32:11 +0000 (0:00:01.143) 0:05:19.554 ******** 2026-04-09 00:33:04.889502 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:04.889520 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.889540 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.889559 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.889579 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.889599 | orchestrator | 
changed: [testbed-node-4] 2026-04-09 00:33:04.889617 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.889634 | orchestrator | 2026-04-09 00:33:04.889649 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-09 00:33:04.889694 | orchestrator | Thursday 09 April 2026 00:32:20 +0000 (0:00:08.399) 0:05:27.954 ******** 2026-04-09 00:33:04.889708 | orchestrator | changed: [testbed-manager] 2026-04-09 00:33:04.889722 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.889750 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.889762 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.889773 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.889784 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.889794 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.889805 | orchestrator | 2026-04-09 00:33:04.889816 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-09 00:33:04.889827 | orchestrator | Thursday 09 April 2026 00:32:23 +0000 (0:00:03.298) 0:05:31.252 ******** 2026-04-09 00:33:04.889838 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:04.889849 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.889859 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.889870 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.889881 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.889893 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.889911 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.889929 | orchestrator | 2026-04-09 00:33:04.889948 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-09 00:33:04.889966 | orchestrator | Thursday 09 April 2026 00:32:24 +0000 (0:00:01.434) 0:05:32.687 ******** 2026-04-09 00:33:04.889984 | orchestrator | ok: [testbed-manager] 
2026-04-09 00:33:04.890003 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.890097 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.890110 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.890121 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.890131 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.890142 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.890153 | orchestrator | 2026-04-09 00:33:04.890164 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-09 00:33:04.890175 | orchestrator | Thursday 09 April 2026 00:32:26 +0000 (0:00:01.357) 0:05:34.044 ******** 2026-04-09 00:33:04.890186 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:04.890197 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:04.890208 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:04.890219 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:04.890229 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:04.890240 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:04.890251 | orchestrator | changed: [testbed-manager] 2026-04-09 00:33:04.890262 | orchestrator | 2026-04-09 00:33:04.890273 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-09 00:33:04.890284 | orchestrator | Thursday 09 April 2026 00:32:26 +0000 (0:00:00.601) 0:05:34.646 ******** 2026-04-09 00:33:04.890305 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:04.890316 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.890327 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.890338 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.890349 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.890360 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.890371 | orchestrator | changed: [testbed-node-3] 2026-04-09 
00:33:04.890382 | orchestrator | 2026-04-09 00:33:04.890393 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-09 00:33:04.890425 | orchestrator | Thursday 09 April 2026 00:32:37 +0000 (0:00:10.354) 0:05:45.001 ******** 2026-04-09 00:33:04.890437 | orchestrator | changed: [testbed-manager] 2026-04-09 00:33:04.890448 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.890459 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.890470 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.890481 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.890492 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.890502 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.890513 | orchestrator | 2026-04-09 00:33:04.890524 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-09 00:33:04.890535 | orchestrator | Thursday 09 April 2026 00:32:38 +0000 (0:00:01.142) 0:05:46.144 ******** 2026-04-09 00:33:04.890546 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:04.890557 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.890568 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.890579 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.890590 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.890600 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.890611 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.890622 | orchestrator | 2026-04-09 00:33:04.890633 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-09 00:33:04.890644 | orchestrator | Thursday 09 April 2026 00:32:47 +0000 (0:00:09.271) 0:05:55.415 ******** 2026-04-09 00:33:04.890675 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:04.890696 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.890714 | 
orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.890733 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.890752 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.890771 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.890782 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.890793 | orchestrator | 2026-04-09 00:33:04.890805 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-09 00:33:04.890816 | orchestrator | Thursday 09 April 2026 00:32:58 +0000 (0:00:10.994) 0:06:06.409 ******** 2026-04-09 00:33:04.890827 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-09 00:33:04.890838 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-09 00:33:04.890849 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-09 00:33:04.890859 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-09 00:33:04.890870 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-09 00:33:04.890881 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-09 00:33:04.890893 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-09 00:33:04.890912 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-09 00:33:04.890930 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-09 00:33:04.890948 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-09 00:33:04.890966 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-09 00:33:04.890985 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-09 00:33:04.891003 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-09 00:33:04.891022 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-09 00:33:04.891045 | orchestrator | 2026-04-09 00:33:04.891056 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-04-09 00:33:04.891067 | orchestrator | Thursday 09 April 2026 00:32:59 +0000 (0:00:01.231) 0:06:07.640 ******** 2026-04-09 00:33:04.891078 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:04.891089 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:04.891099 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:04.891110 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:04.891121 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:04.891132 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:04.891143 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:04.891153 | orchestrator | 2026-04-09 00:33:04.891164 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-09 00:33:04.891175 | orchestrator | Thursday 09 April 2026 00:33:00 +0000 (0:00:00.643) 0:06:08.284 ******** 2026-04-09 00:33:04.891186 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:04.891197 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:04.891208 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:04.891218 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:04.891229 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:04.891240 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:04.891251 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:04.891261 | orchestrator | 2026-04-09 00:33:04.891272 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-09 00:33:04.891284 | orchestrator | Thursday 09 April 2026 00:33:04 +0000 (0:00:03.778) 0:06:12.063 ******** 2026-04-09 00:33:04.891295 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:04.891306 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:04.891317 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:04.891328 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:33:04.891338 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:04.891349 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:04.891360 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:04.891370 | orchestrator | 2026-04-09 00:33:04.891382 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-09 00:33:04.891393 | orchestrator | Thursday 09 April 2026 00:33:04 +0000 (0:00:00.484) 0:06:12.547 ******** 2026-04-09 00:33:04.891404 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-09 00:33:04.891414 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-09 00:33:04.891425 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:04.891436 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-09 00:33:04.891447 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-09 00:33:04.891458 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:04.891468 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-09 00:33:04.891479 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-09 00:33:04.891490 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:04.891509 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-09 00:33:23.787736 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-09 00:33:23.787859 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:23.787878 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-09 00:33:23.787891 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-09 00:33:23.787902 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:23.787913 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-09 00:33:23.787977 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-04-09 00:33:23.787990 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:23.788002 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-09 00:33:23.788013 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-09 00:33:23.788045 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:23.788058 | orchestrator | 2026-04-09 00:33:23.788071 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-09 00:33:23.788083 | orchestrator | Thursday 09 April 2026 00:33:05 +0000 (0:00:00.551) 0:06:13.099 ******** 2026-04-09 00:33:23.788095 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:23.788106 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:23.788116 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:23.788128 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:23.788139 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:23.788150 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:23.788161 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:23.788171 | orchestrator | 2026-04-09 00:33:23.788182 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-09 00:33:23.788194 | orchestrator | Thursday 09 April 2026 00:33:05 +0000 (0:00:00.499) 0:06:13.598 ******** 2026-04-09 00:33:23.788205 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:23.788216 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:23.788229 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:23.788242 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:23.788255 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:23.788268 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:23.788280 | orchestrator | skipping: [testbed-node-5] 
2026-04-09 00:33:23.788293 | orchestrator |
2026-04-09 00:33:23.788306 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-09 00:33:23.788319 | orchestrator | Thursday 09 April 2026 00:33:06 +0000 (0:00:00.645) 0:06:14.243 ********
2026-04-09 00:33:23.788332 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:33:23.788344 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:33:23.788357 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:33:23.788369 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:33:23.788381 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:33:23.788393 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:33:23.788406 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:33:23.788418 | orchestrator |
2026-04-09 00:33:23.788431 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-09 00:33:23.788448 | orchestrator | Thursday 09 April 2026 00:33:06 +0000 (0:00:00.561) 0:06:14.804 ********
2026-04-09 00:33:23.788460 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.788474 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:23.788488 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:23.788530 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:23.788556 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:23.788570 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:23.788583 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:23.788594 | orchestrator |
2026-04-09 00:33:23.788605 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-09 00:33:23.788698 | orchestrator | Thursday 09 April 2026 00:33:08 +0000 (0:00:01.690) 0:06:16.495 ********
2026-04-09 00:33:23.788711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:33:23.788726 | orchestrator |
2026-04-09 00:33:23.788737 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-09 00:33:23.788747 | orchestrator | Thursday 09 April 2026 00:33:09 +0000 (0:00:00.818) 0:06:17.313 ********
2026-04-09 00:33:23.788758 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.788770 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:23.788781 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:23.788792 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:23.788803 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:23.788814 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:23.788834 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:23.788845 | orchestrator |
2026-04-09 00:33:23.788856 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-09 00:33:23.788867 | orchestrator | Thursday 09 April 2026 00:33:10 +0000 (0:00:01.067) 0:06:18.381 ********
2026-04-09 00:33:23.788878 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.788889 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:23.788900 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:23.788911 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:23.788921 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:23.788932 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:23.788943 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:23.788954 | orchestrator |
2026-04-09 00:33:23.788965 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-09 00:33:23.788975 | orchestrator | Thursday 09 April 2026 00:33:11 +0000 (0:00:00.826) 0:06:19.208 ********
2026-04-09 00:33:23.788986 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.788997 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:23.789008 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:23.789018 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:23.789029 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:23.789040 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:23.789051 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:23.789062 | orchestrator |
2026-04-09 00:33:23.789073 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-09 00:33:23.789102 | orchestrator | Thursday 09 April 2026 00:33:12 +0000 (0:00:01.313) 0:06:20.521 ********
2026-04-09 00:33:23.789113 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:33:23.789124 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:23.789135 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:23.789146 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:23.789157 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:23.789168 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:23.789179 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:23.789189 | orchestrator |
2026-04-09 00:33:23.789200 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-09 00:33:23.789211 | orchestrator | Thursday 09 April 2026 00:33:13 +0000 (0:00:01.343) 0:06:21.864 ********
2026-04-09 00:33:23.789222 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.789233 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:23.789244 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:23.789255 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:23.789266 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:23.789277 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:23.789288 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:23.789298 | orchestrator |
2026-04-09 00:33:23.789309 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-09 00:33:23.789320 | orchestrator | Thursday 09 April 2026 00:33:15 +0000 (0:00:01.296) 0:06:23.160 ********
2026-04-09 00:33:23.789331 | orchestrator | changed: [testbed-manager]
2026-04-09 00:33:23.789342 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:23.789353 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:23.789363 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:23.789374 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:23.789385 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:23.789396 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:23.789407 | orchestrator |
2026-04-09 00:33:23.789418 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-09 00:33:23.789429 | orchestrator | Thursday 09 April 2026 00:33:16 +0000 (0:00:01.559) 0:06:24.720 ********
2026-04-09 00:33:23.789440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:33:23.789464 | orchestrator |
2026-04-09 00:33:23.789476 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-09 00:33:23.789487 | orchestrator | Thursday 09 April 2026 00:33:17 +0000 (0:00:00.838) 0:06:25.559 ********
2026-04-09 00:33:23.789498 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.789509 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:23.789520 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:23.789530 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:23.789541 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:23.789552 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:23.789563 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:23.789573 | orchestrator |
2026-04-09 00:33:23.789584 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-09 00:33:23.789595 | orchestrator | Thursday 09 April 2026 00:33:18 +0000 (0:00:01.320) 0:06:26.880 ********
2026-04-09 00:33:23.789606 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.789635 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:23.789646 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:23.789657 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:23.789668 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:23.789679 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:23.789690 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:23.789701 | orchestrator |
2026-04-09 00:33:23.789712 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-09 00:33:23.789723 | orchestrator | Thursday 09 April 2026 00:33:20 +0000 (0:00:01.284) 0:06:28.164 ********
2026-04-09 00:33:23.789733 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.789744 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:23.789755 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:23.789766 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:23.789777 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:23.789788 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:23.789799 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:23.789810 | orchestrator |
2026-04-09 00:33:23.789821 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-09 00:33:23.789832 | orchestrator | Thursday 09 April 2026 00:33:21 +0000 (0:00:01.165) 0:06:29.330 ********
2026-04-09 00:33:23.789843 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:23.789889 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:23.789901 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:23.789912 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:23.789923 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:23.789934 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:23.789944 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:23.789955 | orchestrator |
2026-04-09 00:33:23.789966 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-09 00:33:23.789977 | orchestrator | Thursday 09 April 2026 00:33:22 +0000 (0:00:01.301) 0:06:30.631 ********
2026-04-09 00:33:23.789988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:33:23.789999 | orchestrator |
2026-04-09 00:33:23.790010 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:33:23.790080 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:00.039) 0:06:31.455 ********
2026-04-09 00:33:23.790093 | orchestrator |
2026-04-09 00:33:23.790104 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:33:23.790115 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:00.039) 0:06:31.494 ********
2026-04-09 00:33:23.790126 | orchestrator |
2026-04-09 00:33:23.790137 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:33:23.790148 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:00.188) 0:06:31.683 ********
2026-04-09 00:33:23.790158 | orchestrator |
2026-04-09 00:33:23.790169 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:33:23.790196 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:00.038) 0:06:31.722 ********
2026-04-09 00:33:49.964751 | orchestrator |
2026-04-09 00:33:49.964862 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:33:49.964877 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:00.038) 0:06:31.761 ********
2026-04-09 00:33:49.964888 | orchestrator |
2026-04-09 00:33:49.964899 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:33:49.964909 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:00.045) 0:06:31.806 ********
2026-04-09 00:33:49.964919 | orchestrator |
2026-04-09 00:33:49.964929 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:33:49.964939 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:00.038) 0:06:31.845 ********
2026-04-09 00:33:49.964949 | orchestrator |
2026-04-09 00:33:49.964958 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-09 00:33:49.964968 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:00.039) 0:06:31.885 ********
2026-04-09 00:33:49.964979 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:49.964990 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:49.965000 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:49.965010 | orchestrator |
2026-04-09 00:33:49.965020 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-09 00:33:49.965030 | orchestrator | Thursday 09 April 2026 00:33:25 +0000 (0:00:01.278) 0:06:33.163 ********
2026-04-09 00:33:49.965040 | orchestrator | changed: [testbed-manager]
2026-04-09 00:33:49.965051 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:49.965061 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:49.965070 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:49.965081 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:49.965091 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:49.965101 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:49.965111 | orchestrator |
2026-04-09 00:33:49.965121 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-09 00:33:49.965131 | orchestrator | Thursday 09 April 2026 00:33:26 +0000 (0:00:01.267) 0:06:34.430 ********
2026-04-09 00:33:49.965141 | orchestrator | changed: [testbed-manager]
2026-04-09 00:33:49.965150 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:49.965160 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:49.965170 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:49.965180 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:49.965190 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:49.965199 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:49.965209 | orchestrator |
2026-04-09 00:33:49.965219 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-09 00:33:49.965229 | orchestrator | Thursday 09 April 2026 00:33:27 +0000 (0:00:01.304) 0:06:35.735 ********
2026-04-09 00:33:49.965239 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:33:49.965248 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:49.965258 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:49.965268 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:49.965278 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:49.965288 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:49.965298 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:49.965309 | orchestrator |
2026-04-09 00:33:49.965329 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-09 00:33:49.965341 | orchestrator | Thursday 09 April 2026 00:33:30 +0000 (0:00:02.331) 0:06:38.066 ********
2026-04-09 00:33:49.965353 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:33:49.965365 | orchestrator |
2026-04-09 00:33:49.965376 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-09 00:33:49.965388 | orchestrator | Thursday 09 April 2026 00:33:30 +0000 (0:00:00.108) 0:06:38.175 ********
2026-04-09 00:33:49.965417 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:49.965429 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:49.965441 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:49.965453 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:49.965464 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:49.965476 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:49.965488 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:49.965499 | orchestrator |
2026-04-09 00:33:49.965511 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-09 00:33:49.965523 | orchestrator | Thursday 09 April 2026 00:33:31 +0000 (0:00:01.287) 0:06:39.462 ********
2026-04-09 00:33:49.965534 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:33:49.965546 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:33:49.965557 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:33:49.965641 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:33:49.965663 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:33:49.965681 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:33:49.965697 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:33:49.965714 | orchestrator |
2026-04-09 00:33:49.965730 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-09 00:33:49.965746 | orchestrator | Thursday 09 April 2026 00:33:32 +0000 (0:00:00.531) 0:06:39.994 ********
2026-04-09 00:33:49.965763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:33:49.965784 | orchestrator |
2026-04-09 00:33:49.965801 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-09 00:33:49.965818 | orchestrator | Thursday 09 April 2026 00:33:32 +0000 (0:00:00.865) 0:06:40.859 ********
2026-04-09 00:33:49.965831 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:49.965841 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:49.965850 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:49.965860 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:49.965870 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:49.965880 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:49.965889 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:49.965899 | orchestrator |
2026-04-09 00:33:49.965909 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-09 00:33:49.965919 | orchestrator | Thursday 09 April 2026 00:33:33 +0000 (0:00:00.974) 0:06:41.833 ********
2026-04-09 00:33:49.965928 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-09 00:33:49.965956 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-09 00:33:49.965967 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-09 00:33:49.965977 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-09 00:33:49.965986 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-09 00:33:49.965996 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-09 00:33:49.966006 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-09 00:33:49.966096 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-09 00:33:49.966110 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-09 00:33:49.966120 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-09 00:33:49.966129 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-09 00:33:49.966139 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-09 00:33:49.966149 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-09 00:33:49.966158 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-09 00:33:49.966169 | orchestrator |
2026-04-09 00:33:49.966181 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-09 00:33:49.966191 | orchestrator | Thursday 09 April 2026 00:33:36 +0000 (0:00:02.475) 0:06:44.309 ********
2026-04-09 00:33:49.966214 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:33:49.966225 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:33:49.966236 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:33:49.966247 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:33:49.966258 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:33:49.966269 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:33:49.966280 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:33:49.966291 | orchestrator |
2026-04-09 00:33:49.966302 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-09 00:33:49.966314 | orchestrator | Thursday 09 April 2026 00:33:36 +0000 (0:00:00.502) 0:06:44.811 ********
2026-04-09 00:33:49.966327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:33:49.966339 | orchestrator |
2026-04-09 00:33:49.966350 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-09 00:33:49.966361 | orchestrator | Thursday 09 April 2026 00:33:37 +0000 (0:00:00.900) 0:06:45.712 ********
2026-04-09 00:33:49.966372 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:49.966383 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:49.966394 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:49.966405 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:49.966416 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:49.966427 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:49.966438 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:49.966449 | orchestrator |
2026-04-09 00:33:49.966467 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-09 00:33:49.966478 | orchestrator | Thursday 09 April 2026 00:33:38 +0000 (0:00:00.848) 0:06:46.560 ********
2026-04-09 00:33:49.966489 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:49.966500 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:49.966511 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:49.966522 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:49.966533 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:49.966543 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:49.966554 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:49.966565 | orchestrator |
2026-04-09 00:33:49.966616 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-09 00:33:49.966628 | orchestrator | Thursday 09 April 2026 00:33:39 +0000 (0:00:00.930) 0:06:47.491 ********
2026-04-09 00:33:49.966639 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:33:49.966650 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:33:49.966661 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:33:49.966672 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:33:49.966683 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:33:49.966694 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:33:49.966705 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:33:49.966716 | orchestrator |
2026-04-09 00:33:49.966727 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-09 00:33:49.966738 | orchestrator | Thursday 09 April 2026 00:33:40 +0000 (0:00:00.534) 0:06:48.026 ********
2026-04-09 00:33:49.966749 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:49.966760 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:49.966771 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:49.966782 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:49.966793 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:49.966804 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:49.966815 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:49.966826 | orchestrator |
2026-04-09 00:33:49.966836 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-09 00:33:49.966847 | orchestrator | Thursday 09 April 2026 00:33:41 +0000 (0:00:01.674) 0:06:49.700 ********
2026-04-09 00:33:49.966858 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:33:49.966876 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:33:49.966888 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:33:49.966899 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:33:49.966910 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:33:49.966920 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:33:49.966931 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:33:49.966942 | orchestrator |
2026-04-09 00:33:49.966953 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-09 00:33:49.966964 | orchestrator | Thursday 09 April 2026 00:33:42 +0000 (0:00:00.653) 0:06:50.354 ********
2026-04-09 00:33:49.966975 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:49.966986 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:49.966997 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:49.967008 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:49.967019 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:49.967030 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:49.967050 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:23.090243 | orchestrator |
2026-04-09 00:34:23.090344 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-09 00:34:23.090356 | orchestrator | Thursday 09 April 2026 00:33:50 +0000 (0:00:07.613) 0:06:57.967 ********
2026-04-09 00:34:23.090365 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.090374 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:23.090383 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:23.090390 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:23.090398 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:23.090405 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:23.090417 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:23.090430 | orchestrator |
2026-04-09 00:34:23.090442 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-09 00:34:23.090454 | orchestrator | Thursday 09 April 2026 00:33:51 +0000 (0:00:01.341) 0:06:59.309 ********
2026-04-09 00:34:23.090467 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.090479 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:23.090491 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:23.090503 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:23.090514 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:23.090554 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:23.090567 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:23.090579 | orchestrator |
2026-04-09 00:34:23.090591 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-09 00:34:23.090603 | orchestrator | Thursday 09 April 2026 00:33:53 +0000 (0:00:01.811) 0:07:01.120 ********
2026-04-09 00:34:23.090616 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.090628 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:23.090640 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:23.090653 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:23.090663 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:23.090671 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:23.090679 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:23.090686 | orchestrator |
2026-04-09 00:34:23.090693 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-09 00:34:23.090701 | orchestrator | Thursday 09 April 2026 00:33:54 +0000 (0:00:01.816) 0:07:02.936 ********
2026-04-09 00:34:23.090709 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.090716 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:23.090724 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:23.090731 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:23.090738 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:23.090745 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:23.090753 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:23.090760 | orchestrator |
2026-04-09 00:34:23.090767 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-09 00:34:23.090796 | orchestrator | Thursday 09 April 2026 00:33:55 +0000 (0:00:00.890) 0:07:03.827 ********
2026-04-09 00:34:23.090806 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:34:23.090815 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:34:23.090823 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:34:23.090832 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:34:23.090841 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:34:23.090850 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:34:23.090858 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:34:23.090867 | orchestrator |
2026-04-09 00:34:23.090876 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-09 00:34:23.090886 | orchestrator | Thursday 09 April 2026 00:33:56 +0000 (0:00:00.788) 0:07:04.615 ********
2026-04-09 00:34:23.090894 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:34:23.090903 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:34:23.090911 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:34:23.090920 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:34:23.090929 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:34:23.090938 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:34:23.090946 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:34:23.090954 | orchestrator |
2026-04-09 00:34:23.090963 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-09 00:34:23.090971 | orchestrator | Thursday 09 April 2026 00:33:57 +0000 (0:00:00.636) 0:07:05.252 ********
2026-04-09 00:34:23.090980 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.090989 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:23.090998 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:23.091007 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:23.091016 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:23.091025 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:23.091034 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:23.091042 | orchestrator |
2026-04-09 00:34:23.091051 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-09 00:34:23.091060 | orchestrator | Thursday 09 April 2026 00:33:57 +0000 (0:00:00.521) 0:07:05.773 ********
2026-04-09 00:34:23.091068 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.091077 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:23.091085 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:23.091093 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:23.091102 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:23.091111 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:23.091119 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:23.091128 | orchestrator |
2026-04-09 00:34:23.091137 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-09 00:34:23.091145 | orchestrator | Thursday 09 April 2026 00:33:58 +0000 (0:00:00.498) 0:07:06.271 ********
2026-04-09 00:34:23.091155 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.091162 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:23.091169 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:23.091176 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:23.091183 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:23.091190 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:23.091197 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:23.091204 | orchestrator |
2026-04-09 00:34:23.091212 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-09 00:34:23.091219 | orchestrator | Thursday 09 April 2026 00:33:58 +0000 (0:00:00.494) 0:07:06.766 ********
2026-04-09 00:34:23.091226 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.091233 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:23.091240 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:23.091247 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:23.091254 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:23.091262 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:23.091269 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:23.091276 | orchestrator |
2026-04-09 00:34:23.091299 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-09 00:34:23.091313 | orchestrator | Thursday 09 April 2026 00:34:04 +0000 (0:00:05.535) 0:07:12.302 ********
2026-04-09 00:34:23.091320 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:34:23.091328 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:34:23.091335 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:34:23.091342 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:34:23.091349 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:34:23.091357 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:34:23.091364 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:34:23.091371 | orchestrator |
2026-04-09 00:34:23.091378 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-09 00:34:23.091386 | orchestrator | Thursday 09 April 2026 00:34:05 +0000 (0:00:00.727) 0:07:13.029 ********
2026-04-09 00:34:23.091395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:34:23.091404 | orchestrator |
2026-04-09 00:34:23.091412 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-09 00:34:23.091419 | orchestrator | Thursday 09 April 2026 00:34:05 +0000 (0:00:00.883) 0:07:13.913 ********
2026-04-09 00:34:23.091426 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.091433 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:23.091441 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:23.091448 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:23.091455 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:23.091477 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:23.091485 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:23.091492 | orchestrator |
2026-04-09 00:34:23.091499 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-09 00:34:23.091506 | orchestrator | Thursday 09 April 2026 00:34:08 +0000 (0:00:02.178) 0:07:16.092 ********
2026-04-09 00:34:23.091513 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.091535 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:23.091543 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:23.091550 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:23.091557 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:23.091565 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:23.091572 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:23.091579 | orchestrator |
2026-04-09 00:34:23.091586 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-09 00:34:23.091594 | orchestrator | Thursday 09 April 2026 00:34:09 +0000 (0:00:01.436) 0:07:17.529 ********
2026-04-09 00:34:23.091601 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:23.091608 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:23.091615 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:23.091622 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:23.091630 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:23.091637 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:23.091644 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:23.091651 | orchestrator |
2026-04-09 00:34:23.091659 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-09 00:34:23.091670 | orchestrator | Thursday 09 April 2026 00:34:10 +0000 (0:00:00.866) 0:07:18.395 ********
2026-04-09 00:34:23.091678 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:23.091687 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:23.091694 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:23.091702 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:23.091714 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:23.091722 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:23.091729 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:23.091737 | orchestrator |
2026-04-09 00:34:23.091744 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-09 00:34:23.091751 | orchestrator | Thursday 09 April 2026 00:34:12 +0000 (0:00:01.746) 0:07:20.141 ********
2026-04-09 00:34:23.091759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:34:23.091767 | orchestrator |
2026-04-09 00:34:23.091774 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-09 00:34:23.091781 |
orchestrator | Thursday 09 April 2026 00:34:13 +0000 (0:00:00.954) 0:07:21.096 ******** 2026-04-09 00:34:23.091788 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:23.091796 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:23.091803 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:23.091810 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:23.091817 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:23.091825 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:23.091832 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:23.091839 | orchestrator | 2026-04-09 00:34:23.091851 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-09 00:34:51.779400 | orchestrator | Thursday 09 April 2026 00:34:23 +0000 (0:00:09.930) 0:07:31.027 ******** 2026-04-09 00:34:51.779580 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:51.779599 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:51.779612 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:51.779623 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:51.779634 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:51.779645 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:51.779656 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:51.779668 | orchestrator | 2026-04-09 00:34:51.779680 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-09 00:34:51.779691 | orchestrator | Thursday 09 April 2026 00:34:24 +0000 (0:00:01.594) 0:07:32.621 ******** 2026-04-09 00:34:51.779702 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:51.779713 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:51.779724 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:51.779735 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:51.779747 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:51.779757 | orchestrator | ok: [testbed-node-4] 
2026-04-09 00:34:51.779768 | orchestrator | 2026-04-09 00:34:51.779779 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-09 00:34:51.779790 | orchestrator | Thursday 09 April 2026 00:34:26 +0000 (0:00:01.404) 0:07:34.026 ******** 2026-04-09 00:34:51.779801 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.779813 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:51.779824 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.779835 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.779846 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.779857 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.779868 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.779878 | orchestrator | 2026-04-09 00:34:51.779889 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-09 00:34:51.779900 | orchestrator | 2026-04-09 00:34:51.779911 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-09 00:34:51.779946 | orchestrator | Thursday 09 April 2026 00:34:27 +0000 (0:00:01.158) 0:07:35.185 ******** 2026-04-09 00:34:51.779960 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:51.779974 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:51.779987 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:51.779999 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:51.780013 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:51.780026 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:51.780038 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:51.780051 | orchestrator | 2026-04-09 00:34:51.780064 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-09 00:34:51.780077 | orchestrator | 2026-04-09 00:34:51.780089 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-04-09 00:34:51.780101 | orchestrator | Thursday 09 April 2026 00:34:27 +0000 (0:00:00.423) 0:07:35.609 ******** 2026-04-09 00:34:51.780115 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.780127 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:51.780141 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.780154 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.780167 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.780193 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.780205 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.780218 | orchestrator | 2026-04-09 00:34:51.780232 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-09 00:34:51.780245 | orchestrator | Thursday 09 April 2026 00:34:28 +0000 (0:00:01.261) 0:07:36.870 ******** 2026-04-09 00:34:51.780257 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:51.780270 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:51.780281 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:51.780292 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:51.780303 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:51.780313 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:51.780324 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:51.780335 | orchestrator | 2026-04-09 00:34:51.780346 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-09 00:34:51.780356 | orchestrator | Thursday 09 April 2026 00:34:30 +0000 (0:00:01.406) 0:07:38.277 ******** 2026-04-09 00:34:51.780367 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:51.780378 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:51.780389 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:51.780399 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 00:34:51.780410 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:51.780421 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:51.780432 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:51.780442 | orchestrator | 2026-04-09 00:34:51.780453 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-09 00:34:51.780464 | orchestrator | Thursday 09 April 2026 00:34:30 +0000 (0:00:00.412) 0:07:38.690 ******** 2026-04-09 00:34:51.780496 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:51.780508 | orchestrator | 2026-04-09 00:34:51.780519 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-09 00:34:51.780530 | orchestrator | Thursday 09 April 2026 00:34:31 +0000 (0:00:00.702) 0:07:39.392 ******** 2026-04-09 00:34:51.780543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:51.780556 | orchestrator | 2026-04-09 00:34:51.780567 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-09 00:34:51.780578 | orchestrator | Thursday 09 April 2026 00:34:32 +0000 (0:00:00.816) 0:07:40.209 ******** 2026-04-09 00:34:51.780588 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.780606 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:51.780617 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.780628 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.780639 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.780649 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.780660 | 
orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.780671 | orchestrator | 2026-04-09 00:34:51.780698 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-09 00:34:51.780710 | orchestrator | Thursday 09 April 2026 00:34:41 +0000 (0:00:09.041) 0:07:49.250 ******** 2026-04-09 00:34:51.780721 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.780733 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:51.780743 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.780754 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.780765 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.780776 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.780787 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.780798 | orchestrator | 2026-04-09 00:34:51.780808 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-09 00:34:51.780820 | orchestrator | Thursday 09 April 2026 00:34:42 +0000 (0:00:00.747) 0:07:49.998 ******** 2026-04-09 00:34:51.780831 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.780842 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.780852 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:51.780863 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.780874 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.780885 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.780895 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.780906 | orchestrator | 2026-04-09 00:34:51.780917 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-09 00:34:51.780928 | orchestrator | Thursday 09 April 2026 00:34:43 +0000 (0:00:01.298) 0:07:51.296 ******** 2026-04-09 00:34:51.780939 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.780950 | orchestrator | 
changed: [testbed-node-0] 2026-04-09 00:34:51.780960 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.780971 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.780982 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.780992 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.781003 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.781014 | orchestrator | 2026-04-09 00:34:51.781025 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-04-09 00:34:51.781035 | orchestrator | Thursday 09 April 2026 00:34:45 +0000 (0:00:01.695) 0:07:52.992 ******** 2026-04-09 00:34:51.781046 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.781057 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:51.781068 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.781079 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.781089 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.781100 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.781111 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.781121 | orchestrator | 2026-04-09 00:34:51.781132 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-09 00:34:51.781143 | orchestrator | Thursday 09 April 2026 00:34:46 +0000 (0:00:01.172) 0:07:54.164 ******** 2026-04-09 00:34:51.781154 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.781165 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:51.781176 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.781187 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.781197 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.781208 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.781224 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.781236 | orchestrator | 2026-04-09 
00:34:51.781247 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-09 00:34:51.781266 | orchestrator | 2026-04-09 00:34:51.781277 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-09 00:34:51.781288 | orchestrator | Thursday 09 April 2026 00:34:47 +0000 (0:00:01.029) 0:07:55.193 ******** 2026-04-09 00:34:51.781299 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:51.781310 | orchestrator | 2026-04-09 00:34:51.781321 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-09 00:34:51.781331 | orchestrator | Thursday 09 April 2026 00:34:48 +0000 (0:00:00.852) 0:07:56.046 ******** 2026-04-09 00:34:51.781342 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:51.781353 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:51.781364 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:51.781375 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:51.781386 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:51.781396 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:51.781407 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:51.781418 | orchestrator | 2026-04-09 00:34:51.781429 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-09 00:34:51.781440 | orchestrator | Thursday 09 April 2026 00:34:48 +0000 (0:00:00.794) 0:07:56.840 ******** 2026-04-09 00:34:51.781451 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:51.781462 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:51.781488 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:51.781500 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:51.781510 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:51.781521 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:51.781532 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:51.781543 | orchestrator | 2026-04-09 00:34:51.781554 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-09 00:34:51.781564 | orchestrator | Thursday 09 April 2026 00:34:50 +0000 (0:00:01.257) 0:07:58.098 ******** 2026-04-09 00:34:51.781575 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:51.781586 | orchestrator | 2026-04-09 00:34:51.781597 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-09 00:34:51.781607 | orchestrator | Thursday 09 April 2026 00:34:50 +0000 (0:00:00.800) 0:07:58.899 ******** 2026-04-09 00:34:51.781618 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:51.781629 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:51.781640 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:51.781651 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:51.781661 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:51.781672 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:51.781683 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:51.781693 | orchestrator | 2026-04-09 00:34:51.781711 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-09 00:34:53.243335 | orchestrator | Thursday 09 April 2026 00:34:51 +0000 (0:00:00.813) 0:07:59.713 ******** 2026-04-09 00:34:53.243456 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:53.243541 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:53.243555 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:53.243567 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:53.243578 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:53.243589 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:53.243601 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:53.243612 | orchestrator | 2026-04-09 00:34:53.243625 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:34:53.243637 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-09 00:34:53.243650 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-09 00:34:53.243689 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-09 00:34:53.243701 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-09 00:34:53.243712 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-09 00:34:53.243723 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-09 00:34:53.243734 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-09 00:34:53.243745 | orchestrator | 2026-04-09 00:34:53.243756 | orchestrator | 2026-04-09 00:34:53.243767 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:34:53.243778 | orchestrator | Thursday 09 April 2026 00:34:52 +0000 (0:00:01.190) 0:08:00.903 ******** 2026-04-09 00:34:53.243789 | orchestrator | =============================================================================== 2026-04-09 00:34:53.243800 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.85s 2026-04-09 00:34:53.243810 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.34s 2026-04-09 00:34:53.243827 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 33.09s 2026-04-09 00:34:53.243869 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.45s 2026-04-09 00:34:53.243897 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.25s 2026-04-09 00:34:53.243917 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.99s 2026-04-09 00:34:53.243936 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.55s 2026-04-09 00:34:53.243956 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.36s 2026-04-09 00:34:53.243976 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.93s 2026-04-09 00:34:53.243995 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.27s 2026-04-09 00:34:53.244017 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.04s 2026-04-09 00:34:53.244037 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.68s 2026-04-09 00:34:53.244057 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.43s 2026-04-09 00:34:53.244071 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.40s 2026-04-09 00:34:53.244085 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.28s 2026-04-09 00:34:53.244098 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.61s 2026-04-09 00:34:53.244109 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.95s 2026-04-09 00:34:53.244120 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.25s 2026-04-09 00:34:53.244131 | orchestrator | 
osism.services.chrony : Populate service facts -------------------------- 5.54s 2026-04-09 00:34:53.244141 | orchestrator | osism.commons.services : Populate service facts ------------------------- 4.93s 2026-04-09 00:34:53.402987 | orchestrator | + osism apply fail2ban 2026-04-09 00:35:05.030274 | orchestrator | 2026-04-09 00:35:05 | INFO  | Prepare task for execution of fail2ban. 2026-04-09 00:35:05.110005 | orchestrator | 2026-04-09 00:35:05 | INFO  | Task cda38747-7ac6-460d-a365-996fd4f23611 (fail2ban) was prepared for execution. 2026-04-09 00:35:05.110123 | orchestrator | 2026-04-09 00:35:05 | INFO  | It takes a moment until task cda38747-7ac6-460d-a365-996fd4f23611 (fail2ban) has been started and output is visible here. 2026-04-09 00:35:25.431135 | orchestrator | 2026-04-09 00:35:25.431224 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-09 00:35:25.431235 | orchestrator | 2026-04-09 00:35:25.431243 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-09 00:35:25.431250 | orchestrator | Thursday 09 April 2026 00:35:08 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-04-09 00:35:25.431258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:35:25.431267 | orchestrator | 2026-04-09 00:35:25.431273 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-09 00:35:25.431280 | orchestrator | Thursday 09 April 2026 00:35:09 +0000 (0:00:01.154) 0:00:01.493 ******** 2026-04-09 00:35:25.431286 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:25.431295 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:35:25.431301 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:25.431307 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:25.431313 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:25.431320 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:25.431326 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:25.431332 | orchestrator | 2026-04-09 00:35:25.431338 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-04-09 00:35:25.431345 | orchestrator | Thursday 09 April 2026 00:35:20 +0000 (0:00:10.951) 0:00:12.445 ******** 2026-04-09 00:35:25.431351 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:25.431357 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:25.431363 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:25.431370 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:35:25.431376 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:25.431382 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:25.431389 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:25.431395 | orchestrator | 2026-04-09 00:35:25.431401 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-04-09 00:35:25.431407 | orchestrator | Thursday 09 April 2026 00:35:22 +0000 (0:00:01.554) 0:00:13.999 ******** 2026-04-09 00:35:25.431414 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:25.431421 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:35:25.431429 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:35:25.431481 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:35:25.431493 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:35:25.431504 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:35:25.431514 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:35:25.431525 | orchestrator | 2026-04-09 00:35:25.431535 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-04-09 00:35:25.431545 | orchestrator | Thursday 09 
April 2026 00:35:23 +0000 (0:00:01.244) 0:00:15.244 ******** 2026-04-09 00:35:25.431553 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:25.431560 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:25.431566 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:35:25.431572 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:25.431579 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:25.431585 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:25.431591 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:25.431597 | orchestrator | 2026-04-09 00:35:25.431603 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:35:25.431624 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:35:25.431632 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:35:25.431655 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:35:25.431661 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:35:25.431668 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:35:25.431674 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:35:25.431680 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:35:25.431686 | orchestrator | 2026-04-09 00:35:25.431693 | orchestrator | 2026-04-09 00:35:25.431700 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:35:25.431708 | orchestrator | Thursday 09 April 2026 00:35:25 +0000 (0:00:01.591) 0:00:16.836 ******** 2026-04-09 00:35:25.431715 | 
orchestrator | =============================================================================== 2026-04-09 00:35:25.431722 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.95s 2026-04-09 00:35:25.431729 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.59s 2026-04-09 00:35:25.431737 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.55s 2026-04-09 00:35:25.431744 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.24s 2026-04-09 00:35:25.431751 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.15s 2026-04-09 00:35:25.598713 | orchestrator | + osism apply network 2026-04-09 00:35:36.902813 | orchestrator | 2026-04-09 00:35:36 | INFO  | Prepare task for execution of network. 2026-04-09 00:35:36.970229 | orchestrator | 2026-04-09 00:35:36 | INFO  | Task c9fcd99a-4404-41e6-b9dc-cf98ddab8e71 (network) was prepared for execution. 2026-04-09 00:35:36.970324 | orchestrator | 2026-04-09 00:35:36 | INFO  | It takes a moment until task c9fcd99a-4404-41e6-b9dc-cf98ddab8e71 (network) has been started and output is visible here. 
2026-04-09 00:36:04.377944 | orchestrator | 2026-04-09 00:36:04.378117 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-09 00:36:04.378138 | orchestrator | 2026-04-09 00:36:04.378151 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-09 00:36:04.378163 | orchestrator | Thursday 09 April 2026 00:35:40 +0000 (0:00:00.331) 0:00:00.331 ******** 2026-04-09 00:36:04.378175 | orchestrator | ok: [testbed-manager] 2026-04-09 00:36:04.378189 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:36:04.378201 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:36:04.378212 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:36:04.378223 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:36:04.378234 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:36:04.378245 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:36:04.378256 | orchestrator | 2026-04-09 00:36:04.378268 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-09 00:36:04.378279 | orchestrator | Thursday 09 April 2026 00:35:40 +0000 (0:00:00.618) 0:00:00.950 ******** 2026-04-09 00:36:04.378292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:36:04.378306 | orchestrator | 2026-04-09 00:36:04.378317 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-09 00:36:04.378328 | orchestrator | Thursday 09 April 2026 00:35:41 +0000 (0:00:01.123) 0:00:02.074 ******** 2026-04-09 00:36:04.378339 | orchestrator | ok: [testbed-manager] 2026-04-09 00:36:04.378350 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:36:04.378385 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:36:04.378422 | 
orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:04.378434 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:04.378445 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:04.378456 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:04.378470 | orchestrator |
2026-04-09 00:36:04.378483 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-09 00:36:04.378497 | orchestrator | Thursday 09 April 2026 00:35:44 +0000 (0:00:02.720) 0:00:04.794 ********
2026-04-09 00:36:04.378509 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:04.378522 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:04.378536 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:04.378549 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:04.378562 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:04.378575 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:04.378589 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:04.378602 | orchestrator |
2026-04-09 00:36:04.378615 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-09 00:36:04.378629 | orchestrator | Thursday 09 April 2026 00:35:46 +0000 (0:00:01.642) 0:00:06.436 ********
2026-04-09 00:36:04.378642 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-09 00:36:04.378656 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-09 00:36:04.378670 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-09 00:36:04.378683 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-09 00:36:04.378697 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-09 00:36:04.378710 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-09 00:36:04.378725 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-09 00:36:04.378738 | orchestrator |
2026-04-09 00:36:04.378751 | orchestrator | TASK [osism.commons.network : Write network_netplan_config_template to temporary file] ***
2026-04-09 00:36:04.378765 | orchestrator | Thursday 09 April 2026 00:35:47 +0000 (0:00:01.179) 0:00:07.615 ********
2026-04-09 00:36:04.378779 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:04.378793 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:04.378806 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:04.378819 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:04.378833 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:04.378844 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:04.378855 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:04.378866 | orchestrator |
2026-04-09 00:36:04.378877 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] ***
2026-04-09 00:36:04.378889 | orchestrator | Thursday 09 April 2026 00:35:48 +0000 (0:00:00.620) 0:00:08.236 ********
2026-04-09 00:36:04.378900 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:04.378911 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:04.378922 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:04.378933 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:04.378944 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:04.378955 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:04.378966 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:04.378976 | orchestrator |
2026-04-09 00:36:04.378988 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] ***
2026-04-09 00:36:04.378999 | orchestrator | Thursday 09 April 2026 00:35:48 +0000 (0:00:00.764) 0:00:09.001 ********
2026-04-09 00:36:04.379010 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:04.379020 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:04.379031 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:04.379042 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:04.379053 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:04.379064 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:04.379074 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:04.379085 | orchestrator |
2026-04-09 00:36:04.379104 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-09 00:36:04.379116 | orchestrator | Thursday 09 April 2026 00:35:49 +0000 (0:00:00.756) 0:00:09.758 ********
2026-04-09 00:36:04.379127 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-09 00:36:04.379138 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:36:04.379148 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-09 00:36:04.379159 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-09 00:36:04.379170 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:36:04.379181 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-09 00:36:04.379192 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 00:36:04.379203 | orchestrator |
2026-04-09 00:36:04.379232 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-09 00:36:04.379244 | orchestrator | Thursday 09 April 2026 00:35:52 +0000 (0:00:03.048) 0:00:12.807 ********
2026-04-09 00:36:04.379255 | orchestrator | changed: [testbed-manager]
2026-04-09 00:36:04.379267 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:36:04.379277 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:36:04.379288 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:36:04.379299 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:36:04.379310 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:36:04.379321 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:36:04.379332 | orchestrator |
2026-04-09 00:36:04.379343 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-09 00:36:04.379372 | orchestrator | Thursday 09 April 2026 00:35:54 +0000 (0:00:01.448) 0:00:14.255 ********
2026-04-09 00:36:04.379384 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:36:04.379457 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-09 00:36:04.379469 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:36:04.379481 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-09 00:36:04.379492 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 00:36:04.379503 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-09 00:36:04.379514 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-09 00:36:04.379525 | orchestrator |
2026-04-09 00:36:04.379536 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-09 00:36:04.379548 | orchestrator | Thursday 09 April 2026 00:35:55 +0000 (0:00:01.558) 0:00:15.814 ********
2026-04-09 00:36:04.379559 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:04.379570 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:04.379581 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:04.379592 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:04.379603 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:04.379614 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:04.379626 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:04.379637 | orchestrator |
2026-04-09 00:36:04.379648 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-09 00:36:04.379659 | orchestrator | Thursday 09 April 2026 00:35:56 +0000 (0:00:00.967) 0:00:16.781 ********
2026-04-09 00:36:04.379670 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:04.379681 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:04.379692 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:04.379704 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:04.379715 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:04.379726 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:04.379737 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:04.379748 | orchestrator |
2026-04-09 00:36:04.379759 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-09 00:36:04.379770 | orchestrator | Thursday 09 April 2026 00:35:57 +0000 (0:00:00.555) 0:00:17.337 ********
2026-04-09 00:36:04.379781 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:04.379792 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:04.379803 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:04.379814 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:04.379833 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:04.379844 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:04.379861 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:04.379872 | orchestrator |
2026-04-09 00:36:04.379884 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-09 00:36:04.379895 | orchestrator | Thursday 09 April 2026 00:35:59 +0000 (0:00:02.353) 0:00:19.690 ********
2026-04-09 00:36:04.379906 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:04.379917 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:04.379928 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:04.379939 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:04.379950 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:04.379962 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:04.379973 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-04-09 00:36:04.379985 | orchestrator |
2026-04-09 00:36:04.379996 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-09 00:36:04.380007 | orchestrator | Thursday 09 April 2026 00:36:00 +0000 (0:00:00.808) 0:00:20.499 ********
2026-04-09 00:36:04.380018 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:04.380029 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:36:04.380040 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:36:04.380051 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:36:04.380062 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:36:04.380073 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:36:04.380084 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:36:04.380095 | orchestrator |
2026-04-09 00:36:04.380106 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-09 00:36:04.380117 | orchestrator | Thursday 09 April 2026 00:36:01 +0000 (0:00:01.057) 0:00:21.997 ********
2026-04-09 00:36:04.380129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:36:04.380142 | orchestrator |
2026-04-09 00:36:04.380153 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-09 00:36:04.380164 | orchestrator | Thursday 09 April 2026 00:36:02 +0000 (0:00:01.036) 0:00:23.054 ********
2026-04-09 00:36:04.380175 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:04.380186 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:04.380197 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:04.380208 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:04.380219 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:04.380230 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:04.380241 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:04.380252 | orchestrator |
2026-04-09 00:36:04.380263 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-09 00:36:04.380274 | orchestrator | Thursday 09 April 2026 00:36:03 +0000 (0:00:00.552) 0:00:24.091 ********
2026-04-09 00:36:04.380285 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:04.380296 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:04.380307 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:04.380318 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:04.380330 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:04.380349 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:19.933930 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:19.934067 | orchestrator |
2026-04-09 00:36:19.934078 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-09 00:36:19.934084 | orchestrator | Thursday 09 April 2026 00:36:04 +0000 (0:00:00.552) 0:00:24.644 ********
2026-04-09 00:36:19.934089 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-04-09 00:36:19.934093 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-04-09 00:36:19.934100 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-04-09 00:36:19.934125 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-04-09 00:36:19.934133 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-09 00:36:19.934140 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-04-09 00:36:19.934146 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-09 00:36:19.934151 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-09 00:36:19.934157 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-04-09 00:36:19.934164 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-09 00:36:19.934171 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-09 00:36:19.934177 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-04-09 00:36:19.934183 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-09 00:36:19.934190 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-09 00:36:19.934195 | orchestrator |
2026-04-09 00:36:19.934200 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-09 00:36:19.934204 | orchestrator | Thursday 09 April 2026 00:36:05 +0000 (0:00:01.152) 0:00:25.796 ********
2026-04-09 00:36:19.934208 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:19.934214 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:19.934221 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:19.934227 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:19.934232 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:19.934238 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:19.934244 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:19.934250 | orchestrator |
2026-04-09 00:36:19.934256 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-09 00:36:19.934263 | orchestrator | Thursday 09 April 2026 00:36:06 +0000 (0:00:00.628) 0:00:26.425 ********
2026-04-09 00:36:19.934284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:36:19.934290 | orchestrator |
2026-04-09 00:36:19.934294 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-09 00:36:19.934298 | orchestrator | Thursday 09 April 2026 00:36:10 +0000 (0:00:04.199) 0:00:30.624 ********
2026-04-09 00:36:19.934304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934308 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-09 00:36:19.934313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934349 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-09 00:36:19.934358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-09 00:36:19.934362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-09 00:36:19.934366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-09 00:36:19.934414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-09 00:36:19.934418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-09 00:36:19.934422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-09 00:36:19.934426 | orchestrator |
2026-04-09 00:36:19.934433 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-09 00:36:19.934440 | orchestrator | Thursday 09 April 2026 00:36:15 +0000 (0:00:05.184) 0:00:35.808 ********
2026-04-09 00:36:19.934446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934452 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-09 00:36:19.934458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934464 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-09 00:36:19.934475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:19.934487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-09 00:36:19.934500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:32.109449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-09 00:36:32.109531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-09 00:36:32.109540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-09 00:36:32.109545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-09 00:36:32.109549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-09 00:36:32.109553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-09 00:36:32.109557 | orchestrator |
2026-04-09 00:36:32.109562 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-09 00:36:32.109568 | orchestrator | Thursday 09 April 2026 00:36:20 +0000 (0:00:05.313) 0:00:41.122 ********
2026-04-09 00:36:32.109586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:36:32.109592 | orchestrator |
2026-04-09 00:36:32.109598 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-09 00:36:32.109604 | orchestrator | Thursday 09 April 2026 00:36:22 +0000 (0:00:01.244) 0:00:42.367 ********
2026-04-09 00:36:32.109610 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:32.109618 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:32.109627 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:32.109633 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:32.109655 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:32.109662 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:32.109668 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:32.109674 | orchestrator |
2026-04-09 00:36:32.109680 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-09 00:36:32.109686 | orchestrator | Thursday 09 April 2026 00:36:23 +0000 (0:00:01.058) 0:00:43.425 ********
2026-04-09 00:36:32.109693 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-09 00:36:32.109700 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-09 00:36:32.109707 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-09 00:36:32.109713 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-09 00:36:32.109719 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:32.109727 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-09 00:36:32.109734 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-09 00:36:32.109740 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-09 00:36:32.109746 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-09 00:36:32.109753 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:32.109760 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-09 00:36:32.109766 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-09 00:36:32.109773 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-09 00:36:32.109779 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-09 00:36:32.109785 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-09 00:36:32.109791 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-09 00:36:32.109797 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-09 00:36:32.109803 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-09 00:36:32.109825 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:32.109833 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-09 00:36:32.109840 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-09 00:36:32.109846 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-09 00:36:32.109854 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-09 00:36:32.109861 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:32.109867 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-09 00:36:32.109874 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-09 00:36:32.109880 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-09 00:36:32.109887 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-09 00:36:32.109893 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:32.109901 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:32.109907 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-09 00:36:32.109914 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-09 00:36:32.109920 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-09 00:36:32.109927 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-09 00:36:32.109942 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:32.109950 | orchestrator |
2026-04-09 00:36:32.109956 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-09 00:36:32.109963 | orchestrator | Thursday 09 April 2026 00:36:24 +0000 (0:00:00.884) 0:00:44.310 ********
2026-04-09 00:36:32.109970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:36:32.109977 | orchestrator |
2026-04-09 00:36:32.109983 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-09 00:36:32.109989 | orchestrator | Thursday 09 April 2026 00:36:25 +0000 (0:00:01.194) 0:00:45.504 ********
2026-04-09 00:36:32.109995 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:32.110006 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:32.110014 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:32.110091 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:32.110097 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:32.110104 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:32.110111 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:32.110117 | orchestrator |
2026-04-09 00:36:32.110123 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-04-09 00:36:32.110129 | orchestrator | Thursday 09 April 2026 00:36:25 +0000 (0:00:00.591) 0:00:46.096 ********
2026-04-09 00:36:32.110136 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:32.110142 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:32.110149 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:32.110155 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:32.110161 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:32.110167 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:32.110174 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:32.110181 | orchestrator |
2026-04-09 00:36:32.110188 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-04-09 00:36:32.110195 | orchestrator | Thursday 09 April 2026 00:36:26 +0000 (0:00:00.732) 0:00:46.829 ********
2026-04-09 00:36:32.110202 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:32.110209 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:32.110216 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:32.110223 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:32.110230 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:32.110236 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:32.110243 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:32.110250 | orchestrator |
2026-04-09 00:36:32.110256 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-04-09 00:36:32.110263 | orchestrator | Thursday 09 April 2026 00:36:27 +0000 (0:00:00.583) 0:00:47.412 ********
2026-04-09 00:36:32.110270 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:32.110276 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:32.110283 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:32.110290 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:32.110298 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:32.110305 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:32.110312 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:32.110318 | orchestrator |
2026-04-09 00:36:32.110324 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-09 00:36:32.110331 | orchestrator | Thursday 09 April 2026 00:36:29 +0000 (0:00:01.779) 0:00:49.191 ********
2026-04-09 00:36:32.110338 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:32.110345 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:32.110351 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:32.110380 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:32.110388 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:32.110395 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:32.110409 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:32.110416 | orchestrator |
2026-04-09 00:36:32.110423 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-09 00:36:32.110430 | orchestrator | Thursday 09 April 2026 00:36:30 +0000 (0:00:01.135) 0:00:50.326 ********
2026-04-09 00:36:32.110437 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:32.110444 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:32.110451 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:32.110457 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:32.110463 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:32.110469 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:32.110476 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:32.110482 | orchestrator |
2026-04-09 00:36:32.110496 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-09 00:36:33.403132 | orchestrator | Thursday 09 April 2026 00:36:32 +0000 (0:00:01.938) 0:00:52.265 ********
2026-04-09 00:36:33.403224 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:33.403238 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:33.403248 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:33.403257 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:33.403266 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:33.403274 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:33.403283 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:33.403292 | orchestrator |
2026-04-09 00:36:33.403302 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-09 00:36:33.403311 | orchestrator | Thursday 09 April 2026 00:36:32 +0000 (0:00:00.651) 0:00:52.916 ********
2026-04-09 00:36:33.403320 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:33.403329 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:33.403338 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:33.403347 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:33.403394 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:33.403405 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:33.403413 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:33.403422 | orchestrator |
2026-04-09 00:36:33.403431 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:36:33.403441 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-09 00:36:33.403451 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:33.403460 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:33.403469 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:33.403477 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:33.403503 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:33.403517 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:33.403526 | orchestrator |
2026-04-09 00:36:33.403535 | orchestrator |
2026-04-09 00:36:33.403544 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:36:33.403553 | orchestrator | Thursday 09 April 2026 00:36:33 +0000 (0:00:00.453) 0:00:53.370 ********
2026-04-09 00:36:33.403561 | orchestrator | ===============================================================================
2026-04-09 00:36:33.403570 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.31s
2026-04-09 00:36:33.403598 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.18s
2026-04-09 00:36:33.403607 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.20s
2026-04-09 00:36:33.403616 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.05s
2026-04-09 00:36:33.403625 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.72s
2026-04-09 00:36:33.403633 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.35s
2026-04-09 00:36:33.403642 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.94s
2026-04-09 00:36:33.403650 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.78s
2026-04-09 00:36:33.403659 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.64s
2026-04-09 00:36:33.403668 | orchestrator | osism.commons.network : Remove netplan configuration template -----------
1.56s 2026-04-09 00:36:33.403679 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.50s 2026-04-09 00:36:33.403689 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.45s 2026-04-09 00:36:33.403699 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.24s 2026-04-09 00:36:33.403709 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.19s 2026-04-09 00:36:33.403720 | orchestrator | osism.commons.network : Create required directories --------------------- 1.18s 2026-04-09 00:36:33.403730 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.15s 2026-04-09 00:36:33.403740 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.14s 2026-04-09 00:36:33.403750 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.12s 2026-04-09 00:36:33.403761 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.06s 2026-04-09 00:36:33.403771 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.06s 2026-04-09 00:36:33.512515 | orchestrator | + osism apply wireguard 2026-04-09 00:36:44.713834 | orchestrator | 2026-04-09 00:36:44 | INFO  | Prepare task for execution of wireguard. 2026-04-09 00:36:44.788124 | orchestrator | 2026-04-09 00:36:44 | INFO  | Task bf5314dc-1dbd-473f-8e7d-695a94f5aad1 (wireguard) was prepared for execution. 2026-04-09 00:36:44.788219 | orchestrator | 2026-04-09 00:36:44 | INFO  | It takes a moment until task bf5314dc-1dbd-473f-8e7d-695a94f5aad1 (wireguard) has been started and output is visible here. 
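The osism.commons.network recap above shows the role rendering systemd-networkd .network and .netdev files on every node. As a point of reference, a minimal sketch of the kind of .network unit such a role produces; the interface name and address below are placeholders, not values taken from this job:

```ini
# Hypothetical systemd-networkd unit, e.g. /etc/systemd/network/10-eth1.network;
# interface name and address are illustrative assumptions only.
[Match]
Name=eth1

[Network]
Address=192.168.16.10/20
```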
2026-04-09 00:37:01.329059 | orchestrator |
2026-04-09 00:37:01.329174 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-09 00:37:01.329191 | orchestrator |
2026-04-09 00:37:01.329205 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-09 00:37:01.329217 | orchestrator | Thursday 09 April 2026 00:36:47 +0000 (0:00:00.207) 0:00:00.207 ********
2026-04-09 00:37:01.329229 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:01.329242 | orchestrator |
2026-04-09 00:37:01.329253 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-09 00:37:01.329265 | orchestrator | Thursday 09 April 2026 00:36:49 +0000 (0:00:01.416) 0:00:01.624 ********
2026-04-09 00:37:01.329276 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:01.329287 | orchestrator |
2026-04-09 00:37:01.329298 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-09 00:37:01.329310 | orchestrator | Thursday 09 April 2026 00:36:54 +0000 (0:00:05.136) 0:00:06.760 ********
2026-04-09 00:37:01.329321 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:01.329404 | orchestrator |
2026-04-09 00:37:01.329416 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-09 00:37:01.329427 | orchestrator | Thursday 09 April 2026 00:36:54 +0000 (0:00:00.362) 0:00:07.236 ********
2026-04-09 00:37:01.329439 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:01.329475 | orchestrator |
2026-04-09 00:37:01.329487 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-09 00:37:01.329499 | orchestrator | Thursday 09 April 2026 00:36:55 +0000 (0:00:00.449) 0:00:07.598 ********
2026-04-09 00:37:01.329510 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:01.329521 | orchestrator |
2026-04-09 00:37:01.329532 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-09 00:37:01.329542 | orchestrator | Thursday 09 April 2026 00:36:55 +0000 (0:00:00.449) 0:00:08.047 ********
2026-04-09 00:37:01.329553 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:01.329564 | orchestrator |
2026-04-09 00:37:01.329575 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-09 00:37:01.329586 | orchestrator | Thursday 09 April 2026 00:36:55 +0000 (0:00:00.370) 0:00:08.418 ********
2026-04-09 00:37:01.329597 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:01.329611 | orchestrator |
2026-04-09 00:37:01.329630 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-09 00:37:01.329658 | orchestrator | Thursday 09 April 2026 00:36:56 +0000 (0:00:00.366) 0:00:08.784 ********
2026-04-09 00:37:01.329679 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:01.329696 | orchestrator |
2026-04-09 00:37:01.329712 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-09 00:37:01.329729 | orchestrator | Thursday 09 April 2026 00:36:57 +0000 (0:00:01.063) 0:00:09.848 ********
2026-04-09 00:37:01.329748 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:37:01.329767 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:01.329778 | orchestrator |
2026-04-09 00:37:01.329789 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-09 00:37:01.329800 | orchestrator | Thursday 09 April 2026 00:36:58 +0000 (0:00:00.804) 0:00:10.652 ********
2026-04-09 00:37:01.329811 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:01.329822 | orchestrator |
2026-04-09 00:37:01.329833 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-09 00:37:01.329844 | orchestrator | Thursday 09 April 2026 00:37:00 +0000 (0:00:01.979) 0:00:12.632 ********
2026-04-09 00:37:01.329854 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:01.329865 | orchestrator |
2026-04-09 00:37:01.329893 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:37:01.329905 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:37:01.329918 | orchestrator |
2026-04-09 00:37:01.329928 | orchestrator |
2026-04-09 00:37:01.329939 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:37:01.329950 | orchestrator | Thursday 09 April 2026 00:37:01 +0000 (0:00:00.970) 0:00:13.602 ********
2026-04-09 00:37:01.329961 | orchestrator | ===============================================================================
2026-04-09 00:37:01.329972 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.14s
2026-04-09 00:37:01.329982 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.98s
2026-04-09 00:37:01.329993 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.42s
2026-04-09 00:37:01.330004 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.06s
2026-04-09 00:37:01.330074 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s
2026-04-09 00:37:01.330087 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.80s
2026-04-09 00:37:01.330097 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.48s
2026-04-09 00:37:01.330108 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.45s
2026-04-09 00:37:01.330119 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.37s
2026-04-09 00:37:01.330130 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.37s
2026-04-09 00:37:01.330152 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.36s
2026-04-09 00:37:01.491703 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-09 00:37:01.516448 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-09 00:37:01.516538 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-09 00:37:01.591315 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 13 100 13 0 0 174 0 --:--:-- --:--:-- --:--:-- 175
2026-04-09 00:37:01.600730 | orchestrator | + osism apply --environment custom workarounds
2026-04-09 00:37:02.838605 | orchestrator | 2026-04-09 00:37:02 | INFO  | Trying to run play workarounds in environment custom
2026-04-09 00:37:12.886502 | orchestrator | 2026-04-09 00:37:12 | INFO  | Prepare task for execution of workarounds.
2026-04-09 00:37:12.976984 | orchestrator | 2026-04-09 00:37:12 | INFO  | Task f0be8e4b-f79a-4b60-afbb-fdc97d94749f (workarounds) was prepared for execution.
2026-04-09 00:37:12.977057 | orchestrator | 2026-04-09 00:37:12 | INFO  | It takes a moment until task f0be8e4b-f79a-4b60-afbb-fdc97d94749f (workarounds) has been started and output is visible here.
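The wireguard role above generates a server keypair and a preshared key, then writes wg0.conf and a client configuration before starting wg-quick@wg0.service. For orientation, a sketch of the general shape of a wg-quick wg0.conf; the keys, addresses, and port below are placeholders, not values from this deployment:

```ini
# Hypothetical wg-quick configuration (e.g. /etc/wireguard/wg0.conf);
# all values below are illustrative assumptions.
[Interface]
Address = 192.168.48.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
```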
2026-04-09 00:37:36.991883 | orchestrator |
2026-04-09 00:37:36.992003 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:37:36.992020 | orchestrator |
2026-04-09 00:37:36.992032 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-09 00:37:36.992044 | orchestrator | Thursday 09 April 2026 00:37:15 +0000 (0:00:00.163) 0:00:00.164 ********
2026-04-09 00:37:36.992056 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-09 00:37:36.992067 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-09 00:37:36.992079 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-09 00:37:36.992090 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-09 00:37:36.992101 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-09 00:37:36.992112 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-09 00:37:36.992123 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-09 00:37:36.992135 | orchestrator |
2026-04-09 00:37:36.992147 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-09 00:37:36.992158 | orchestrator |
2026-04-09 00:37:36.992169 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-09 00:37:36.992180 | orchestrator | Thursday 09 April 2026 00:37:16 +0000 (0:00:00.584) 0:00:00.748 ********
2026-04-09 00:37:36.992191 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:36.992204 | orchestrator |
2026-04-09 00:37:36.992232 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-09 00:37:36.992243 | orchestrator |
2026-04-09 00:37:36.992254 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-09 00:37:36.992265 | orchestrator | Thursday 09 April 2026 00:37:18 +0000 (0:00:02.333) 0:00:03.082 ********
2026-04-09 00:37:36.992277 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:37:36.992355 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:37:36.992369 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:37:36.992381 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:37:36.992392 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:37:36.992403 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:37:36.992414 | orchestrator |
2026-04-09 00:37:36.992426 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-09 00:37:36.992439 | orchestrator |
2026-04-09 00:37:36.992452 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-09 00:37:36.992464 | orchestrator | Thursday 09 April 2026 00:37:21 +0000 (0:00:02.369) 0:00:05.452 ********
2026-04-09 00:37:36.992478 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:36.992515 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:36.992528 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:36.992540 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:36.992553 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:36.992566 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:36.992579 | orchestrator |
2026-04-09 00:37:36.992591 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-09 00:37:36.992604 | orchestrator | Thursday 09 April 2026 00:37:22 +0000 (0:00:01.291) 0:00:06.743 ********
2026-04-09 00:37:36.992617 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:36.992631 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:36.992644 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:36.992657 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:36.992670 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:36.992683 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:36.992696 | orchestrator |
2026-04-09 00:37:36.992708 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-09 00:37:36.992721 | orchestrator | Thursday 09 April 2026 00:37:26 +0000 (0:00:04.063) 0:00:10.807 ********
2026-04-09 00:37:36.992733 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:37:36.992746 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:37:36.992758 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:37:36.992771 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:37:36.992784 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:37:36.992795 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:37:36.992806 | orchestrator |
2026-04-09 00:37:36.992817 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-09 00:37:36.992828 | orchestrator |
2026-04-09 00:37:36.992839 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-09 00:37:36.992850 | orchestrator | Thursday 09 April 2026 00:37:27 +0000 (0:00:00.536) 0:00:11.344 ********
2026-04-09 00:37:36.992861 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:36.992872 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:36.992883 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:36.992894 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:36.992905 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:36.992916 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:36.992927 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:36.992938 | orchestrator |
2026-04-09 00:37:36.992949 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-09 00:37:36.992960 | orchestrator | Thursday 09 April 2026 00:37:28 +0000 (0:00:01.748) 0:00:13.092 ********
2026-04-09 00:37:36.992971 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:36.992981 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:36.992992 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:36.993003 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:36.993014 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:36.993026 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:36.993054 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:36.993066 | orchestrator |
2026-04-09 00:37:36.993077 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-09 00:37:36.993088 | orchestrator | Thursday 09 April 2026 00:37:30 +0000 (0:00:01.451) 0:00:14.544 ********
2026-04-09 00:37:36.993099 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:37:36.993110 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:37:36.993121 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:37:36.993140 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:37:36.993168 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:36.993180 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:37:36.993202 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:37:36.993213 | orchestrator |
2026-04-09 00:37:36.993225 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-09 00:37:36.993235 | orchestrator | Thursday 09 April 2026 00:37:31 +0000 (0:00:01.769) 0:00:16.313 ********
2026-04-09 00:37:36.993246 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:36.993257 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:36.993268 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:36.993279 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:36.993329 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:36.993341 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:36.993352 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:36.993363 | orchestrator |
2026-04-09 00:37:36.993374 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-09 00:37:36.993385 | orchestrator | Thursday 09 April 2026 00:37:33 +0000 (0:00:01.587) 0:00:17.902 ********
2026-04-09 00:37:36.993396 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:37:36.993413 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:37:36.993425 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:37:36.993436 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:37:36.993447 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:37:36.993457 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:37:36.993468 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:37:36.993479 | orchestrator |
2026-04-09 00:37:36.993490 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-09 00:37:36.993501 | orchestrator |
2026-04-09 00:37:36.993512 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-09 00:37:36.993523 | orchestrator | Thursday 09 April 2026 00:37:34 +0000 (0:00:00.748) 0:00:18.650 ********
2026-04-09 00:37:36.993534 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:36.993544 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:37:36.993556 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:37:36.993566 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:37:36.993577 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:37:36.993588 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:37:36.993599 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:37:36.993610 | orchestrator |
2026-04-09 00:37:36.993621 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:37:36.993633 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 00:37:36.993646 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:36.993657 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:36.993668 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:36.993679 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:36.993690 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:36.993701 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:36.993712 | orchestrator |
2026-04-09 00:37:36.993723 | orchestrator |
2026-04-09 00:37:36.993734 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:37:36.993761 | orchestrator | Thursday 09 April 2026 00:37:36 +0000 (0:00:02.636) 0:00:21.287 ********
2026-04-09 00:37:36.993782 | orchestrator | ===============================================================================
2026-04-09 00:37:36.993802 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.06s
2026-04-09 00:37:36.993820 | orchestrator | Install python3-docker -------------------------------------------------- 2.64s
2026-04-09 00:37:36.993838 | orchestrator | Apply netplan configuration --------------------------------------------- 2.37s
2026-04-09 00:37:36.993857 | orchestrator | Apply netplan configuration --------------------------------------------- 2.33s
2026-04-09 00:37:36.993875 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.77s
2026-04-09 00:37:36.993893 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.75s
2026-04-09 00:37:36.993912 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.59s
2026-04-09 00:37:36.993931 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.45s
2026-04-09 00:37:36.993951 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.29s
2026-04-09 00:37:36.993970 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.75s
2026-04-09 00:37:36.993989 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.58s
2026-04-09 00:37:36.994069 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.54s
2026-04-09 00:37:37.482507 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-09 00:37:48.847925 | orchestrator | 2026-04-09 00:37:48 | INFO  | Prepare task for execution of reboot.
2026-04-09 00:37:48.926943 | orchestrator | 2026-04-09 00:37:48 | INFO  | Task 600afb2a-2b91-4664-8ab2-25ca770980d2 (reboot) was prepared for execution.
2026-04-09 00:37:48.927033 | orchestrator | 2026-04-09 00:37:48 | INFO  | It takes a moment until task 600afb2a-2b91-4664-8ab2-25ca770980d2 (reboot) has been started and output is visible here.
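The workarounds play above distributes the testbed CA certificate to the non-manager nodes and then runs update-ca-certificates (the update-ca-trust task is skipped on these Debian-family hosts). A sketch of that pattern as Ansible tasks; the task names and the certificate path come from the log, but the module arguments and destination directory are illustrative assumptions, not the playbook's actual source:

```yaml
# Sketch of the CA-distribution pattern seen in the log above;
# module arguments and the destination path are assumptions.
- name: Copy custom CA certificates
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: /usr/local/share/ca-certificates/
  loop:
    - /opt/configuration/environments/kolla/certificates/ca/testbed.crt

- name: Run update-ca-certificates
  ansible.builtin.command: update-ca-certificates
  when: ansible_os_family == "Debian"
```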
2026-04-09 00:37:59.786000 | orchestrator | 2026-04-09 00:37:59.786213 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:37:59.786233 | orchestrator | 2026-04-09 00:37:59.786244 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:37:59.786255 | orchestrator | Thursday 09 April 2026 00:37:51 +0000 (0:00:00.218) 0:00:00.218 ******** 2026-04-09 00:37:59.786293 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:37:59.786307 | orchestrator | 2026-04-09 00:37:59.786317 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:37:59.786327 | orchestrator | Thursday 09 April 2026 00:37:51 +0000 (0:00:00.119) 0:00:00.338 ******** 2026-04-09 00:37:59.786337 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:37:59.786346 | orchestrator | 2026-04-09 00:37:59.786370 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:37:59.786380 | orchestrator | Thursday 09 April 2026 00:37:53 +0000 (0:00:01.220) 0:00:01.558 ******** 2026-04-09 00:37:59.786390 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:37:59.786400 | orchestrator | 2026-04-09 00:37:59.786410 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:37:59.786419 | orchestrator | 2026-04-09 00:37:59.786429 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:37:59.786438 | orchestrator | Thursday 09 April 2026 00:37:53 +0000 (0:00:00.094) 0:00:01.653 ******** 2026-04-09 00:37:59.786448 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:37:59.786457 | orchestrator | 2026-04-09 00:37:59.786467 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:37:59.786476 | orchestrator | Thursday 09 April 
2026 00:37:53 +0000 (0:00:00.086) 0:00:01.740 ******** 2026-04-09 00:37:59.786486 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:37:59.786495 | orchestrator | 2026-04-09 00:37:59.786505 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:37:59.786536 | orchestrator | Thursday 09 April 2026 00:37:54 +0000 (0:00:00.973) 0:00:02.714 ******** 2026-04-09 00:37:59.786549 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:37:59.786561 | orchestrator | 2026-04-09 00:37:59.786573 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:37:59.786584 | orchestrator | 2026-04-09 00:37:59.786595 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:37:59.786606 | orchestrator | Thursday 09 April 2026 00:37:54 +0000 (0:00:00.110) 0:00:02.825 ******** 2026-04-09 00:37:59.786618 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:37:59.786630 | orchestrator | 2026-04-09 00:37:59.786641 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:37:59.786653 | orchestrator | Thursday 09 April 2026 00:37:54 +0000 (0:00:00.094) 0:00:02.919 ******** 2026-04-09 00:37:59.786664 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:37:59.786675 | orchestrator | 2026-04-09 00:37:59.786687 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:37:59.786698 | orchestrator | Thursday 09 April 2026 00:37:55 +0000 (0:00:01.039) 0:00:03.959 ******** 2026-04-09 00:37:59.786711 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:37:59.786728 | orchestrator | 2026-04-09 00:37:59.786743 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:37:59.786755 | orchestrator | 2026-04-09 00:37:59.786766 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-04-09 00:37:59.786777 | orchestrator | Thursday 09 April 2026 00:37:55 +0000 (0:00:00.130) 0:00:04.090 ******** 2026-04-09 00:37:59.786792 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:37:59.786809 | orchestrator | 2026-04-09 00:37:59.786825 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:37:59.786841 | orchestrator | Thursday 09 April 2026 00:37:55 +0000 (0:00:00.165) 0:00:04.255 ******** 2026-04-09 00:37:59.786857 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:37:59.786875 | orchestrator | 2026-04-09 00:37:59.786892 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:37:59.786911 | orchestrator | Thursday 09 April 2026 00:37:56 +0000 (0:00:01.076) 0:00:05.332 ******** 2026-04-09 00:37:59.786930 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:37:59.786950 | orchestrator | 2026-04-09 00:37:59.786969 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:37:59.786981 | orchestrator | 2026-04-09 00:37:59.786992 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:37:59.787005 | orchestrator | Thursday 09 April 2026 00:37:56 +0000 (0:00:00.117) 0:00:05.449 ******** 2026-04-09 00:37:59.787024 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:37:59.787067 | orchestrator | 2026-04-09 00:37:59.787085 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:37:59.787103 | orchestrator | Thursday 09 April 2026 00:37:57 +0000 (0:00:00.251) 0:00:05.700 ******** 2026-04-09 00:37:59.787120 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:37:59.787136 | orchestrator | 2026-04-09 00:37:59.787153 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-09 00:37:59.787172 | orchestrator | Thursday 09 April 2026 00:37:58 +0000 (0:00:01.042) 0:00:06.743 ******** 2026-04-09 00:37:59.787190 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:37:59.787206 | orchestrator | 2026-04-09 00:37:59.787221 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:37:59.787238 | orchestrator | 2026-04-09 00:37:59.787257 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:37:59.787302 | orchestrator | Thursday 09 April 2026 00:37:58 +0000 (0:00:00.095) 0:00:06.839 ******** 2026-04-09 00:37:59.787340 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:37:59.787357 | orchestrator | 2026-04-09 00:37:59.787374 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:37:59.787408 | orchestrator | Thursday 09 April 2026 00:37:58 +0000 (0:00:00.093) 0:00:06.933 ******** 2026-04-09 00:37:59.787423 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:37:59.787441 | orchestrator | 2026-04-09 00:37:59.787459 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:37:59.787476 | orchestrator | Thursday 09 April 2026 00:37:59 +0000 (0:00:01.041) 0:00:07.974 ******** 2026-04-09 00:37:59.787522 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:37:59.787543 | orchestrator | 2026-04-09 00:37:59.787562 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:37:59.787581 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:37:59.787602 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:37:59.787631 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-09 00:37:59.787648 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:37:59.787666 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:37:59.787684 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:37:59.787701 | orchestrator | 2026-04-09 00:37:59.787719 | orchestrator | 2026-04-09 00:37:59.787737 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:37:59.787752 | orchestrator | Thursday 09 April 2026 00:37:59 +0000 (0:00:00.041) 0:00:08.016 ******** 2026-04-09 00:37:59.787771 | orchestrator | =============================================================================== 2026-04-09 00:37:59.787788 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.40s 2026-04-09 00:37:59.787805 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.81s 2026-04-09 00:37:59.787823 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2026-04-09 00:37:59.947052 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-09 00:38:11.221441 | orchestrator | 2026-04-09 00:38:11 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-09 00:38:11.289341 | orchestrator | 2026-04-09 00:38:11 | INFO  | Task e61b723c-fd12-42a0-bdda-fb0fc9a3ee99 (wait-for-connection) was prepared for execution. 2026-04-09 00:38:11.289408 | orchestrator | 2026-04-09 00:38:11 | INFO  | It takes a moment until task e61b723c-fd12-42a0-bdda-fb0fc9a3ee99 (wait-for-connection) has been started and output is visible here. 
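Both the reboot and the wait-for-connection plays above are invoked with `-e ireallymeanit=yes`, and each play opens with an "Exit playbook, if user did not mean to reboot systems" task that is skipped when the flag is set. A minimal sketch of that confirmation-gate pattern, expressed as shell rather than the actual Ansible task (the function name `confirm_reboot` is hypothetical, not from the testbed scripts):

```shell
# Hypothetical shell rendering of the ireallymeanit guard seen in the play:
# abort unless the caller explicitly confirmed the destructive action.
confirm_reboot() {
    local ireallymeanit="${1:-no}"
    if [[ "$ireallymeanit" != "yes" ]]; then
        echo "Exiting: pass ireallymeanit=yes to really reboot" >&2
        return 1
    fi
    echo "rebooting"
}
```

In the real playbook the same check is a `fail`/`meta: end_play` style task evaluated per host, which is why every node shows one `skipping:` result before its reboot task runs.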
2026-04-09 00:38:26.198400 | orchestrator | 2026-04-09 00:38:26.198522 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-09 00:38:26.198539 | orchestrator | 2026-04-09 00:38:26.198551 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-09 00:38:26.198563 | orchestrator | Thursday 09 April 2026 00:38:14 +0000 (0:00:00.315) 0:00:00.315 ******** 2026-04-09 00:38:26.198575 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:38:26.198589 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:38:26.198600 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:38:26.198613 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:38:26.198624 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:38:26.198635 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:38:26.198647 | orchestrator | 2026-04-09 00:38:26.198658 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:38:26.198669 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:26.198720 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:26.198733 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:26.198753 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:26.198770 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:26.198787 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:26.198805 | orchestrator | 2026-04-09 00:38:26.198822 | orchestrator | 2026-04-09 00:38:26.198839 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-09 00:38:26.198858 | orchestrator | Thursday 09 April 2026 00:38:26 +0000 (0:00:11.565) 0:00:11.881 ******** 2026-04-09 00:38:26.198876 | orchestrator | =============================================================================== 2026-04-09 00:38:26.198895 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s 2026-04-09 00:38:26.306938 | orchestrator | + osism apply hddtemp 2026-04-09 00:38:37.405761 | orchestrator | 2026-04-09 00:38:37 | INFO  | Prepare task for execution of hddtemp. 2026-04-09 00:38:37.500611 | orchestrator | 2026-04-09 00:38:37 | INFO  | Task 7fe881f9-ca53-48ce-a60b-c7726a473c33 (hddtemp) was prepared for execution. 2026-04-09 00:38:37.500997 | orchestrator | 2026-04-09 00:38:37 | INFO  | It takes a moment until task 7fe881f9-ca53-48ce-a60b-c7726a473c33 (hddtemp) has been started and output is visible here. 2026-04-09 00:39:04.038147 | orchestrator | 2026-04-09 00:39:04.038325 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-09 00:39:04.038354 | orchestrator | 2026-04-09 00:39:04.038376 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-09 00:39:04.038390 | orchestrator | Thursday 09 April 2026 00:38:40 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-04-09 00:39:04.038402 | orchestrator | ok: [testbed-manager] 2026-04-09 00:39:04.038415 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:39:04.038426 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:39:04.038438 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:39:04.038449 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:39:04.038459 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:39:04.038486 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:39:04.038497 | orchestrator | 2026-04-09 00:39:04.038509 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-09 00:39:04.038520 | orchestrator | Thursday 09 April 2026 00:38:41 +0000 (0:00:00.533) 0:00:00.815 ******** 2026-04-09 00:39:04.038533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:39:04.038546 | orchestrator | 2026-04-09 00:39:04.038557 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-09 00:39:04.038569 | orchestrator | Thursday 09 April 2026 00:38:42 +0000 (0:00:00.834) 0:00:01.650 ******** 2026-04-09 00:39:04.038579 | orchestrator | ok: [testbed-manager] 2026-04-09 00:39:04.038591 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:39:04.038601 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:39:04.038612 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:39:04.038623 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:39:04.038634 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:39:04.038648 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:39:04.038662 | orchestrator | 2026-04-09 00:39:04.038676 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-09 00:39:04.038716 | orchestrator | Thursday 09 April 2026 00:38:44 +0000 (0:00:02.333) 0:00:03.983 ******** 2026-04-09 00:39:04.038730 | orchestrator | changed: [testbed-manager] 2026-04-09 00:39:04.038745 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:39:04.038758 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:39:04.038772 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:39:04.038786 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:39:04.038799 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:39:04.038812 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:39:04.038825 | 
orchestrator | 2026-04-09 00:39:04.038839 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-09 00:39:04.038853 | orchestrator | Thursday 09 April 2026 00:38:45 +0000 (0:00:00.868) 0:00:04.852 ******** 2026-04-09 00:39:04.038867 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:39:04.038880 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:39:04.038892 | orchestrator | ok: [testbed-manager] 2026-04-09 00:39:04.038906 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:39:04.038926 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:39:04.038946 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:39:04.038964 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:39:04.038979 | orchestrator | 2026-04-09 00:39:04.038993 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-09 00:39:04.039007 | orchestrator | Thursday 09 April 2026 00:38:46 +0000 (0:00:01.191) 0:00:06.043 ******** 2026-04-09 00:39:04.039027 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:39:04.039045 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:39:04.039056 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:39:04.039067 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:39:04.039078 | orchestrator | changed: [testbed-manager] 2026-04-09 00:39:04.039089 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:39:04.039108 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:39:04.039128 | orchestrator | 2026-04-09 00:39:04.039140 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-09 00:39:04.039151 | orchestrator | Thursday 09 April 2026 00:38:47 +0000 (0:00:00.591) 0:00:06.635 ******** 2026-04-09 00:39:04.039162 | orchestrator | changed: [testbed-manager] 2026-04-09 00:39:04.039173 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:39:04.039187 | orchestrator | changed: [testbed-node-1] 
2026-04-09 00:39:04.039207 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:39:04.039223 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:39:04.039264 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:39:04.039284 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:39:04.039296 | orchestrator | 2026-04-09 00:39:04.039307 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-09 00:39:04.039318 | orchestrator | Thursday 09 April 2026 00:39:01 +0000 (0:00:14.057) 0:00:20.693 ******** 2026-04-09 00:39:04.039330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:39:04.039350 | orchestrator | 2026-04-09 00:39:04.039369 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-09 00:39:04.039382 | orchestrator | Thursday 09 April 2026 00:39:02 +0000 (0:00:01.018) 0:00:21.712 ******** 2026-04-09 00:39:04.039392 | orchestrator | changed: [testbed-manager] 2026-04-09 00:39:04.039403 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:39:04.039414 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:39:04.039430 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:39:04.039450 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:39:04.039465 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:39:04.039476 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:39:04.039487 | orchestrator | 2026-04-09 00:39:04.039498 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:39:04.039522 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:39:04.039562 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:39:04.039576 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:39:04.039587 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:39:04.039607 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:39:04.039627 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:39:04.039644 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:39:04.039656 | orchestrator | 2026-04-09 00:39:04.039667 | orchestrator | 2026-04-09 00:39:04.039678 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:39:04.039691 | orchestrator | Thursday 09 April 2026 00:39:03 +0000 (0:00:01.751) 0:00:23.463 ******** 2026-04-09 00:39:04.039711 | orchestrator | =============================================================================== 2026-04-09 00:39:04.039729 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.06s 2026-04-09 00:39:04.039740 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.33s 2026-04-09 00:39:04.039751 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.75s 2026-04-09 00:39:04.039762 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.19s 2026-04-09 00:39:04.039773 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.02s 2026-04-09 00:39:04.039792 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.87s 2026-04-09 00:39:04.039812 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.83s 2026-04-09 00:39:04.039824 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.59s 2026-04-09 00:39:04.039835 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.53s 2026-04-09 00:39:04.150643 | orchestrator | ++ semver latest 7.1.1 2026-04-09 00:39:04.198646 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:39:04.198746 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:39:04.198764 | orchestrator | + sudo systemctl restart manager.service 2026-04-09 00:39:21.474428 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 00:39:21.474538 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-09 00:39:21.474556 | orchestrator | + local max_attempts=60 2026-04-09 00:39:21.474568 | orchestrator | + local name=ceph-ansible 2026-04-09 00:39:21.474578 | orchestrator | + local attempt_num=1 2026-04-09 00:39:21.474589 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:21.501883 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:21.501967 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:21.501981 | orchestrator | + sleep 5 2026-04-09 00:39:26.506727 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:26.539606 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:26.539718 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:26.539740 | orchestrator | + sleep 5 2026-04-09 00:39:31.542428 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:31.574151 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:31.574299 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:31.574321 | orchestrator | + sleep 5 2026-04-09 00:39:36.578397 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:36.617533 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:36.617625 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:36.617640 | orchestrator | + sleep 5 2026-04-09 00:39:41.621323 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:41.654782 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:41.654876 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:41.654893 | orchestrator | + sleep 5 2026-04-09 00:39:46.660232 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:46.703395 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:46.703495 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:46.703511 | orchestrator | + sleep 5 2026-04-09 00:39:51.707997 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:51.738085 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:51.738180 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:51.738195 | orchestrator | + sleep 5 2026-04-09 00:39:56.743320 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:56.790440 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:56.790540 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:56.790557 | orchestrator | + sleep 5 2026-04-09 00:40:01.792813 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:40:01.828133 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:01.828231 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:40:01.828248 | orchestrator | + sleep 5 2026-04-09 00:40:06.831717 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:40:06.876385 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:06.876480 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:40:06.876495 | orchestrator | + sleep 5 2026-04-09 00:40:11.879619 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:40:11.915383 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:11.915466 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:40:11.915482 | orchestrator | + sleep 5 2026-04-09 00:40:16.919789 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:40:16.956625 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:16.956721 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:40:16.956738 | orchestrator | + sleep 5 2026-04-09 00:40:21.961695 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:40:21.997444 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:21.997542 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:40:21.997558 | orchestrator | + sleep 5 2026-04-09 00:40:27.001425 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:40:27.035803 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:27.035910 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-09 00:40:27.035928 | orchestrator | + local max_attempts=60 2026-04-09 00:40:27.035942 | orchestrator | + local name=kolla-ansible 2026-04-09 00:40:27.035954 | orchestrator | + local attempt_num=1 2026-04-09 00:40:27.036538 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-09 00:40:27.067118 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:27.067224 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-09 00:40:27.067246 | orchestrator | + local max_attempts=60 2026-04-09 00:40:27.067264 | orchestrator | + local name=osism-ansible 2026-04-09 00:40:27.067309 | orchestrator | + local attempt_num=1 2026-04-09 00:40:27.068122 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-09 00:40:27.100158 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:27.100260 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 00:40:27.100340 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-09 00:40:27.230475 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-09 00:40:27.373877 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-09 00:40:27.519861 | orchestrator | ARA in osism-ansible already disabled. 2026-04-09 00:40:27.648618 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-09 00:40:27.648739 | orchestrator | + osism apply gather-facts 2026-04-09 00:40:38.933718 | orchestrator | 2026-04-09 00:40:38 | INFO  | Prepare task for execution of gather-facts. 2026-04-09 00:40:39.010762 | orchestrator | 2026-04-09 00:40:39 | INFO  | Task 6077c6e9-84c6-45b8-8ebb-336c59d14b74 (gather-facts) was prepared for execution. 2026-04-09 00:40:39.010836 | orchestrator | 2026-04-09 00:40:39 | INFO  | It takes a moment until task 6077c6e9-84c6-45b8-8ebb-336c59d14b74 (gather-facts) has been started and output is visible here. 
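The trace above shows `wait_for_container_healthy` polling `docker inspect -f '{{.State.Health.Status}}'` every five seconds until the container reports `healthy` (ceph-ansible cycles through `unhealthy` and `starting` before passing; kolla-ansible and osism-ansible are healthy on the first probe). A sketch reconstructed from the trace; the real helper lives in the testbed configuration scripts and may differ in detail, and the plain `docker` invocation here stands in for the traced `/usr/bin/docker`:

```shell
# Reconstruction (from the set -x trace) of the health-wait helper:
# poll the container's health status up to max_attempts times, 5s apart.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            return 1  # gave up: container never became healthy
        fi
        sleep 5
    done
}
```

With `max_attempts=60` this allows up to roughly five minutes per container, which matches the ~65 seconds ceph-ansible needed after the `manager.service` restart.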
2026-04-09 00:40:48.416085 | orchestrator | 2026-04-09 00:40:48.416192 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 00:40:48.416208 | orchestrator | 2026-04-09 00:40:48.416220 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 00:40:48.416232 | orchestrator | Thursday 09 April 2026 00:40:41 +0000 (0:00:00.233) 0:00:00.233 ******** 2026-04-09 00:40:48.416244 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:40:48.416256 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:40:48.416268 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:40:48.416279 | orchestrator | ok: [testbed-manager] 2026-04-09 00:40:48.416339 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:40:48.416351 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:40:48.416362 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:40:48.416374 | orchestrator | 2026-04-09 00:40:48.416386 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 00:40:48.416397 | orchestrator | 2026-04-09 00:40:48.416408 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 00:40:48.416419 | orchestrator | Thursday 09 April 2026 00:40:47 +0000 (0:00:05.738) 0:00:05.972 ******** 2026-04-09 00:40:48.416431 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:40:48.416443 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:40:48.416454 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:40:48.416465 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:40:48.416476 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:48.416488 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:48.416499 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:48.416510 | orchestrator | 2026-04-09 00:40:48.416521 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-09 00:40:48.416532 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:48.416545 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:48.416556 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:48.416587 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:48.416599 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:48.416610 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:48.416621 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:48.416636 | orchestrator | 2026-04-09 00:40:48.416650 | orchestrator | 2026-04-09 00:40:48.416663 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:40:48.416676 | orchestrator | Thursday 09 April 2026 00:40:48 +0000 (0:00:00.526) 0:00:06.499 ******** 2026-04-09 00:40:48.416689 | orchestrator | =============================================================================== 2026-04-09 00:40:48.416702 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.74s 2026-04-09 00:40:48.416739 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-04-09 00:40:48.535921 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-09 00:40:48.545888 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-09 
00:40:48.565153 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-09 00:40:48.573069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-09 00:40:48.589966 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-09 00:40:48.599469 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-09 00:40:48.613973 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-09 00:40:48.624648 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-09 00:40:48.633774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-09 00:40:48.643238 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-09 00:40:48.652423 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-09 00:40:48.662327 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-09 00:40:48.679075 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-09 00:40:48.693733 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-09 00:40:48.711110 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-09 00:40:48.724220 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-09 00:40:48.734913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-09 00:40:48.752570 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-09 00:40:48.765555 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-09 00:40:48.775628 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-09 00:40:48.783486 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-09 00:40:48.790539 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-09 00:40:48.808431 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-09 00:40:48.818326 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-09 00:40:48.933436 | orchestrator | ok: Runtime: 0:23:14.408019 2026-04-09 00:40:49.028581 | 2026-04-09 00:40:49.028729 | TASK [Deploy services] 2026-04-09 00:40:49.572869 | orchestrator | skipping: Conditional result was False 2026-04-09 00:40:49.591058 | 2026-04-09 00:40:49.591216 | TASK [Deploy in a nutshell] 2026-04-09 00:40:50.300795 | orchestrator | + set -e 2026-04-09 00:40:50.301123 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 00:40:50.301165 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 00:40:50.301189 | orchestrator | ++ INTERACTIVE=false 2026-04-09 00:40:50.301203 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 00:40:50.301221 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 00:40:50.301243 | 
orchestrator | + source /opt/manager-vars.sh 2026-04-09 00:40:50.301358 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 00:40:50.301403 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 00:40:50.301435 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 00:40:50.301451 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 00:40:50.301463 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 00:40:50.301482 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 00:40:50.301493 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 00:40:50.301513 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 00:40:50.301525 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-09 00:40:50.301539 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-09 00:40:50.301550 | orchestrator | ++ export ARA=false 2026-04-09 00:40:50.301562 | orchestrator | ++ ARA=false 2026-04-09 00:40:50.301573 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 00:40:50.301585 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 00:40:50.301596 | orchestrator | ++ export TEMPEST=true 2026-04-09 00:40:50.301607 | orchestrator | ++ TEMPEST=true 2026-04-09 00:40:50.301617 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 00:40:50.301628 | orchestrator | ++ IS_ZUUL=true 2026-04-09 00:40:50.301639 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 00:40:50.301650 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 00:40:50.301661 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 00:40:50.301672 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 00:40:50.301683 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 00:40:50.301694 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 00:40:50.301705 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 00:40:50.301715 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 00:40:50.301726 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 00:40:50.301737 | 
orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 00:40:50.301748 | orchestrator | + echo 2026-04-09 00:40:50.301760 | orchestrator | 2026-04-09 00:40:50.301771 | orchestrator | # PULL IMAGES 2026-04-09 00:40:50.301782 | orchestrator | 2026-04-09 00:40:50.301793 | orchestrator | + echo '# PULL IMAGES' 2026-04-09 00:40:50.301804 | orchestrator | + echo 2026-04-09 00:40:50.302361 | orchestrator | ++ semver latest 7.0.0 2026-04-09 00:40:50.357829 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:40:50.357925 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:40:50.357959 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-09 00:40:51.446333 | orchestrator | 2026-04-09 00:40:51 | INFO  | Trying to run play pull-images in environment custom 2026-04-09 00:41:01.664812 | orchestrator | 2026-04-09 00:41:01 | INFO  | Prepare task for execution of pull-images. 2026-04-09 00:41:01.737674 | orchestrator | 2026-04-09 00:41:01 | INFO  | Task 463458c8-95c4-4104-afba-b602aff1e644 (pull-images) was prepared for execution. 2026-04-09 00:41:01.737767 | orchestrator | 2026-04-09 00:41:01 | INFO  | Task 463458c8-95c4-4104-afba-b602aff1e644 is running in background. No more output. Check ARA for logs. 2026-04-09 00:41:03.033039 | orchestrator | 2026-04-09 00:41:03 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-09 00:41:13.060115 | orchestrator | 2026-04-09 00:41:13 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-09 00:41:13.134835 | orchestrator | 2026-04-09 00:41:13 | INFO  | Task 39730479-bb56-480d-b313-6cb8c0be3547 (wipe-partitions) was prepared for execution. 2026-04-09 00:41:13.134933 | orchestrator | 2026-04-09 00:41:13 | INFO  | It takes a moment until task 39730479-bb56-480d-b313-6cb8c0be3547 (wipe-partitions) has been started and output is visible here. 
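The wipe-partitions play that follows runs `wipefs` on each OSD disk and then overwrites its first 32M with zeros. A minimal sketch of that zeroing step, run against a throwaway file-backed image instead of a real `/dev/sdX` (the image path and use of `dd` flags here are illustrative; the play's exact command line is not shown in the log):

```shell
#!/usr/bin/env bash
# Sketch of the "Overwrite first 32M with zeros" step from the
# wipe-partitions play, demonstrated on a temp file, not a real disk.
set -eu

IMG=$(mktemp)                                              # stand-in for /dev/sdb
dd if=/dev/urandom of="$IMG" bs=1M count=32 status=none    # simulate old on-disk data

# Zero the first 32 MiB in place, as the play does per device.
dd if=/dev/zero of="$IMG" bs=1M count=32 conv=notrunc status=none

# Verify: the image now compares equal to 32 MiB of zeros.
cmp -s "$IMG" <(head -c 33554432 /dev/zero) && echo "wiped"
rm -f "$IMG"
```

On a real node the play additionally reloads udev rules and requests device events afterwards so the kernel re-reads the now-empty partition tables.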
2026-04-09 00:41:25.014107 | orchestrator | 2026-04-09 00:41:25.014197 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-09 00:41:25.014208 | orchestrator | 2026-04-09 00:41:25.014218 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-09 00:41:25.014238 | orchestrator | Thursday 09 April 2026 00:41:16 +0000 (0:00:00.162) 0:00:00.162 ******** 2026-04-09 00:41:25.014282 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:41:25.014294 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:41:25.014303 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:41:25.014380 | orchestrator | 2026-04-09 00:41:25.014391 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-09 00:41:25.014399 | orchestrator | Thursday 09 April 2026 00:41:17 +0000 (0:00:01.043) 0:00:01.205 ******** 2026-04-09 00:41:25.014410 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:25.014418 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:25.014427 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:41:25.014435 | orchestrator | 2026-04-09 00:41:25.014443 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-09 00:41:25.014452 | orchestrator | Thursday 09 April 2026 00:41:17 +0000 (0:00:00.238) 0:00:01.444 ******** 2026-04-09 00:41:25.014460 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:25.014470 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:41:25.014480 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:41:25.014489 | orchestrator | 2026-04-09 00:41:25.014498 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-09 00:41:25.014508 | orchestrator | Thursday 09 April 2026 00:41:18 +0000 (0:00:00.543) 0:00:01.987 ******** 2026-04-09 00:41:25.014517 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:41:25.014527 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:25.014536 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:41:25.014545 | orchestrator | 2026-04-09 00:41:25.014554 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-09 00:41:25.014563 | orchestrator | Thursday 09 April 2026 00:41:18 +0000 (0:00:00.240) 0:00:02.228 ******** 2026-04-09 00:41:25.014573 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-09 00:41:25.014587 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-09 00:41:25.014597 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-09 00:41:25.014607 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-09 00:41:25.014616 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-09 00:41:25.014625 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-09 00:41:25.014632 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-09 00:41:25.014639 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-09 00:41:25.014646 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-09 00:41:25.014654 | orchestrator | 2026-04-09 00:41:25.014661 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-09 00:41:25.014669 | orchestrator | Thursday 09 April 2026 00:41:20 +0000 (0:00:01.474) 0:00:03.702 ******** 2026-04-09 00:41:25.014676 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-09 00:41:25.014683 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-09 00:41:25.014690 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-09 00:41:25.014697 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-09 00:41:25.014704 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-09 00:41:25.014710 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-09 00:41:25.014717 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-09 00:41:25.014724 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-09 00:41:25.014731 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-09 00:41:25.014738 | orchestrator | 2026-04-09 00:41:25.014750 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-09 00:41:25.014757 | orchestrator | Thursday 09 April 2026 00:41:21 +0000 (0:00:01.343) 0:00:05.046 ******** 2026-04-09 00:41:25.014764 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-09 00:41:25.014771 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-09 00:41:25.014778 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-09 00:41:25.014785 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-09 00:41:25.014799 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-09 00:41:25.014807 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-09 00:41:25.014814 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-09 00:41:25.014821 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-09 00:41:25.014828 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-09 00:41:25.014835 | orchestrator | 2026-04-09 00:41:25.014842 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-09 00:41:25.014849 | orchestrator | Thursday 09 April 2026 00:41:23 +0000 (0:00:02.078) 0:00:07.125 ******** 2026-04-09 00:41:25.014856 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:41:25.014863 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:41:25.014869 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:41:25.014876 | orchestrator | 2026-04-09 00:41:25.014883 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-09 00:41:25.014890 | orchestrator | Thursday 09 April 2026 00:41:24 +0000 (0:00:00.578) 0:00:07.704 ******** 2026-04-09 00:41:25.014897 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:41:25.014903 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:41:25.014910 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:41:25.014918 | orchestrator | 2026-04-09 00:41:25.014925 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:41:25.014933 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:25.014941 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:25.014964 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:25.014971 | orchestrator | 2026-04-09 00:41:25.014979 | orchestrator | 2026-04-09 00:41:25.014986 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:41:25.014993 | orchestrator | Thursday 09 April 2026 00:41:24 +0000 (0:00:00.812) 0:00:08.517 ******** 2026-04-09 00:41:25.014999 | orchestrator | =============================================================================== 2026-04-09 00:41:25.015005 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.08s 2026-04-09 00:41:25.015011 | orchestrator | Check device availability ----------------------------------------------- 1.47s 2026-04-09 00:41:25.015017 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s 2026-04-09 00:41:25.015022 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.04s 2026-04-09 00:41:25.015028 | orchestrator | Request device events from the kernel 
----------------------------------- 0.81s 2026-04-09 00:41:25.015034 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2026-04-09 00:41:25.015040 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s 2026-04-09 00:41:25.015045 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-04-09 00:41:25.015051 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2026-04-09 00:41:36.419628 | orchestrator | 2026-04-09 00:41:36 | INFO  | Prepare task for execution of facts. 2026-04-09 00:41:36.496306 | orchestrator | 2026-04-09 00:41:36 | INFO  | Task 982ffc14-8b10-4f61-bdbe-730e47b2a7ac (facts) was prepared for execution. 2026-04-09 00:41:36.496496 | orchestrator | 2026-04-09 00:41:36 | INFO  | It takes a moment until task 982ffc14-8b10-4f61-bdbe-730e47b2a7ac (facts) has been started and output is visible here. 2026-04-09 00:41:48.538693 | orchestrator | 2026-04-09 00:41:48.538812 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-09 00:41:48.538842 | orchestrator | 2026-04-09 00:41:48.538908 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 00:41:48.538923 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-04-09 00:41:48.538934 | orchestrator | ok: [testbed-manager] 2026-04-09 00:41:48.538946 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:41:48.538957 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:41:48.538967 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:41:48.538978 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:48.538989 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:41:48.538999 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:41:48.539010 | orchestrator | 2026-04-09 00:41:48.539021 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-04-09 00:41:48.539032 | orchestrator | Thursday 09 April 2026 00:41:41 +0000 (0:00:01.299) 0:00:01.638 ******** 2026-04-09 00:41:48.539043 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:41:48.539054 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:41:48.539065 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:41:48.539076 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:41:48.539087 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:48.539098 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:48.539108 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:41:48.539119 | orchestrator | 2026-04-09 00:41:48.539130 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 00:41:48.539157 | orchestrator | 2026-04-09 00:41:48.539169 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 00:41:48.539181 | orchestrator | Thursday 09 April 2026 00:41:42 +0000 (0:00:01.203) 0:00:02.842 ******** 2026-04-09 00:41:48.539192 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:41:48.539203 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:41:48.539214 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:41:48.539225 | orchestrator | ok: [testbed-manager] 2026-04-09 00:41:48.539235 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:41:48.539246 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:41:48.539257 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:48.539268 | orchestrator | 2026-04-09 00:41:48.539278 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 00:41:48.539289 | orchestrator | 2026-04-09 00:41:48.539300 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 00:41:48.539311 | orchestrator | Thursday 09 
April 2026 00:41:47 +0000 (0:00:05.564) 0:00:08.407 ******** 2026-04-09 00:41:48.539349 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:41:48.539361 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:41:48.539372 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:41:48.539383 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:41:48.539394 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:48.539406 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:48.539424 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:41:48.539443 | orchestrator | 2026-04-09 00:41:48.539463 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:41:48.539483 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:48.539505 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:48.539524 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:48.539542 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:48.539562 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:48.539596 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:48.539616 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:48.539638 | orchestrator | 2026-04-09 00:41:48.539658 | orchestrator | 2026-04-09 00:41:48.539677 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:41:48.539691 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.448) 0:00:08.855 ******** 2026-04-09 
00:41:48.539701 | orchestrator | =============================================================================== 2026-04-09 00:41:48.539712 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.56s 2026-04-09 00:41:48.539729 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s 2026-04-09 00:41:48.539751 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2026-04-09 00:41:48.539777 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-04-09 00:41:49.807017 | orchestrator | 2026-04-09 00:41:49 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-09 00:41:49.862077 | orchestrator | 2026-04-09 00:41:49 | INFO  | Task 26a369a0-8953-4f5a-b62c-b876bd3bae4d (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-09 00:41:49.862169 | orchestrator | 2026-04-09 00:41:49 | INFO  | It takes a moment until task 26a369a0-8953-4f5a-b62c-b876bd3bae4d (ceph-configure-lvm-volumes) has been started and output is visible here. 
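The ceph-configure-lvm-volumes play below assigns a stable `osd_lvm_uuid` to each OSD data device (the log shows fixed values for `sdb` and `sdc` on testbed-node-3) and renders them into a ceph-ansible style `lvm_volumes` list for the block-only layout. A small sketch of that idea, assuming a name-based (uuid5) derivation and illustrative VG/LV naming; the play's actual derivation scheme and name templates are not visible in the log:

```python
import uuid


def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable, reproducible UUID for a device's OSD VG/LV.

    uuid5 over host/device is an assumption for illustration; the log
    only shows that the UUIDs are stable per host and device.
    """
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}/{device}"))


def lvm_volumes_block_only(hostname: str, devices: list[str]) -> list[dict]:
    """Build an lvm_volumes-like list for the block-only layout
    (no separate DB or WAL devices, matching the skipped tasks above)."""
    vols = []
    for dev in devices:
        u = osd_lvm_uuid(hostname, dev)
        vols.append({
            "data": f"osd-block-{u}",   # illustrative LV name
            "data_vg": f"ceph-{u}",     # illustrative VG name
        })
    return vols


vols = lvm_volumes_block_only("testbed-node-3", ["sdb", "sdc"])
print(vols[0]["data_vg"])
```

Because the derivation is deterministic, re-running the play on the same host regenerates identical VG/LV names, which is what lets the configuration step be safely repeated.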
2026-04-09 00:42:00.518440 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-09 00:42:00.518547 | orchestrator | 2.16.14 2026-04-09 00:42:00.518561 | orchestrator | 2026-04-09 00:42:00.519283 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-09 00:42:00.519300 | orchestrator | 2026-04-09 00:42:00.519311 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:42:00.519321 | orchestrator | Thursday 09 April 2026 00:41:53 +0000 (0:00:00.293) 0:00:00.293 ******** 2026-04-09 00:42:00.519364 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 00:42:00.519379 | orchestrator | 2026-04-09 00:42:00.519393 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:42:00.519407 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 (0:00:00.211) 0:00:00.504 ******** 2026-04-09 00:42:00.519422 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:42:00.519433 | orchestrator | 2026-04-09 00:42:00.519441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519449 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 (0:00:00.189) 0:00:00.693 ******** 2026-04-09 00:42:00.519468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-09 00:42:00.519477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-09 00:42:00.519485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-09 00:42:00.519493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-09 00:42:00.519501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-09 
00:42:00.519509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-09 00:42:00.519517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-09 00:42:00.519525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-09 00:42:00.519533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-09 00:42:00.519541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-09 00:42:00.519568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-09 00:42:00.519577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-09 00:42:00.519585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-09 00:42:00.519592 | orchestrator | 2026-04-09 00:42:00.519601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519608 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 (0:00:00.351) 0:00:01.045 ******** 2026-04-09 00:42:00.519617 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.519625 | orchestrator | 2026-04-09 00:42:00.519633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519641 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.476) 0:00:01.521 ******** 2026-04-09 00:42:00.519649 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.519657 | orchestrator | 2026-04-09 00:42:00.519665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519678 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.185) 0:00:01.707 ******** 2026-04-09 
00:42:00.519686 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.519694 | orchestrator | 2026-04-09 00:42:00.519702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519710 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.185) 0:00:01.892 ******** 2026-04-09 00:42:00.519719 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.519727 | orchestrator | 2026-04-09 00:42:00.519735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519743 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.180) 0:00:02.072 ******** 2026-04-09 00:42:00.519751 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.519759 | orchestrator | 2026-04-09 00:42:00.519767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519775 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.180) 0:00:02.253 ******** 2026-04-09 00:42:00.519784 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.519792 | orchestrator | 2026-04-09 00:42:00.519800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519808 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.186) 0:00:02.440 ******** 2026-04-09 00:42:00.519816 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.519824 | orchestrator | 2026-04-09 00:42:00.519832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519840 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.196) 0:00:02.637 ******** 2026-04-09 00:42:00.519848 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.519856 | orchestrator | 2026-04-09 00:42:00.519864 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-09 00:42:00.519872 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.184) 0:00:02.821 ******** 2026-04-09 00:42:00.519880 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610) 2026-04-09 00:42:00.519889 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610) 2026-04-09 00:42:00.519898 | orchestrator | 2026-04-09 00:42:00.519907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519935 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.396) 0:00:03.218 ******** 2026-04-09 00:42:00.519945 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289) 2026-04-09 00:42:00.519955 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289) 2026-04-09 00:42:00.519965 | orchestrator | 2026-04-09 00:42:00.519980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.519999 | orchestrator | Thursday 09 April 2026 00:41:57 +0000 (0:00:00.402) 0:00:03.621 ******** 2026-04-09 00:42:00.520009 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2) 2026-04-09 00:42:00.520019 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2) 2026-04-09 00:42:00.520029 | orchestrator | 2026-04-09 00:42:00.520038 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.520048 | orchestrator | Thursday 09 April 2026 00:41:57 +0000 (0:00:00.600) 0:00:04.221 ******** 2026-04-09 00:42:00.520058 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d) 2026-04-09 00:42:00.520068 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d) 2026-04-09 00:42:00.520078 | orchestrator | 2026-04-09 00:42:00.520088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:00.520097 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.550) 0:00:04.771 ******** 2026-04-09 00:42:00.520107 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:42:00.520117 | orchestrator | 2026-04-09 00:42:00.520127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:00.520137 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.574) 0:00:05.346 ******** 2026-04-09 00:42:00.520147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-09 00:42:00.520156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-09 00:42:00.520166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-09 00:42:00.520176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-09 00:42:00.520185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-09 00:42:00.520195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-09 00:42:00.520205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-09 00:42:00.520214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-09 00:42:00.520224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-09 00:42:00.520234 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-09 00:42:00.520244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-09 00:42:00.520254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-09 00:42:00.520263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-09 00:42:00.520273 | orchestrator | 2026-04-09 00:42:00.520283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:00.520293 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.341) 0:00:05.687 ******** 2026-04-09 00:42:00.520302 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.520312 | orchestrator | 2026-04-09 00:42:00.520322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:00.520356 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.184) 0:00:05.871 ******** 2026-04-09 00:42:00.520366 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.520376 | orchestrator | 2026-04-09 00:42:00.520386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:00.520395 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.171) 0:00:06.042 ******** 2026-04-09 00:42:00.520405 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.520423 | orchestrator | 2026-04-09 00:42:00.520432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:00.520442 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.174) 0:00:06.217 ******** 2026-04-09 00:42:00.520452 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.520462 | orchestrator | 2026-04-09 00:42:00.520471 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-09 00:42:00.520481 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.158) 0:00:06.375 ******** 2026-04-09 00:42:00.520490 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.520500 | orchestrator | 2026-04-09 00:42:00.520510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:00.520519 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.172) 0:00:06.547 ******** 2026-04-09 00:42:00.520529 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.520539 | orchestrator | 2026-04-09 00:42:00.520548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:00.520558 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.166) 0:00:06.714 ******** 2026-04-09 00:42:00.520568 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:00.520578 | orchestrator | 2026-04-09 00:42:00.520593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:07.167015 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.162) 0:00:06.877 ******** 2026-04-09 00:42:07.167127 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167142 | orchestrator | 2026-04-09 00:42:07.167154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:07.167165 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.167) 0:00:07.044 ******** 2026-04-09 00:42:07.167176 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-09 00:42:07.167188 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-09 00:42:07.167198 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-09 00:42:07.167209 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-09 00:42:07.167220 | orchestrator | 2026-04-09 
00:42:07.167231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:07.167262 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.804) 0:00:07.848 ******** 2026-04-09 00:42:07.167271 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167277 | orchestrator | 2026-04-09 00:42:07.167284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:07.167290 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.176) 0:00:08.025 ******** 2026-04-09 00:42:07.167297 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167306 | orchestrator | 2026-04-09 00:42:07.167317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:07.167327 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.176) 0:00:08.202 ******** 2026-04-09 00:42:07.167424 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167435 | orchestrator | 2026-04-09 00:42:07.167446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:07.167456 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.195) 0:00:08.397 ******** 2026-04-09 00:42:07.167467 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167477 | orchestrator | 2026-04-09 00:42:07.167488 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-09 00:42:07.167499 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.179) 0:00:08.576 ******** 2026-04-09 00:42:07.167510 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-09 00:42:07.167521 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-09 00:42:07.167532 | orchestrator | 2026-04-09 00:42:07.167542 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-09 00:42:07.167553 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.139) 0:00:08.716 ******** 2026-04-09 00:42:07.167588 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167600 | orchestrator | 2026-04-09 00:42:07.167610 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-09 00:42:07.167620 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.118) 0:00:08.835 ******** 2026-04-09 00:42:07.167632 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167642 | orchestrator | 2026-04-09 00:42:07.167653 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-09 00:42:07.167663 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.110) 0:00:08.946 ******** 2026-04-09 00:42:07.167674 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167686 | orchestrator | 2026-04-09 00:42:07.167697 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-09 00:42:07.167709 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.109) 0:00:09.055 ******** 2026-04-09 00:42:07.167719 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:42:07.167730 | orchestrator | 2026-04-09 00:42:07.167740 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-09 00:42:07.167751 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.110) 0:00:09.166 ******** 2026-04-09 00:42:07.167763 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2293633-4853-52c3-92d9-c83407e5923f'}}) 2026-04-09 00:42:07.167771 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ee08831-7be2-5055-b7bf-21e225eea3cc'}}) 2026-04-09 00:42:07.167778 | orchestrator | 2026-04-09 00:42:07.167785 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-09 00:42:07.167793 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.158) 0:00:09.324 ******** 2026-04-09 00:42:07.167801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2293633-4853-52c3-92d9-c83407e5923f'}})  2026-04-09 00:42:07.167817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ee08831-7be2-5055-b7bf-21e225eea3cc'}})  2026-04-09 00:42:07.167829 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167837 | orchestrator | 2026-04-09 00:42:07.167845 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-09 00:42:07.167852 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.122) 0:00:09.446 ******** 2026-04-09 00:42:07.167860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2293633-4853-52c3-92d9-c83407e5923f'}})  2026-04-09 00:42:07.167867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ee08831-7be2-5055-b7bf-21e225eea3cc'}})  2026-04-09 00:42:07.167875 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167882 | orchestrator | 2026-04-09 00:42:07.167889 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-09 00:42:07.167897 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.134) 0:00:09.581 ******** 2026-04-09 00:42:07.167904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2293633-4853-52c3-92d9-c83407e5923f'}})  2026-04-09 00:42:07.167928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ee08831-7be2-5055-b7bf-21e225eea3cc'}})  2026-04-09 00:42:07.167936 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.167942 | 
orchestrator | 2026-04-09 00:42:07.167948 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-09 00:42:07.167954 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.257) 0:00:09.838 ******** 2026-04-09 00:42:07.167961 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:42:07.167967 | orchestrator | 2026-04-09 00:42:07.167973 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-09 00:42:07.167982 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.137) 0:00:09.975 ******** 2026-04-09 00:42:07.167992 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:42:07.168011 | orchestrator | 2026-04-09 00:42:07.168022 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-09 00:42:07.168032 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.133) 0:00:10.108 ******** 2026-04-09 00:42:07.168038 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.168045 | orchestrator | 2026-04-09 00:42:07.168051 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-09 00:42:07.168058 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.126) 0:00:10.235 ******** 2026-04-09 00:42:07.168064 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.168070 | orchestrator | 2026-04-09 00:42:07.168076 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-09 00:42:07.168082 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.104) 0:00:10.339 ******** 2026-04-09 00:42:07.168088 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.168094 | orchestrator | 2026-04-09 00:42:07.168100 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-09 00:42:07.168106 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 
(0:00:00.131) 0:00:10.471 ******** 2026-04-09 00:42:07.168113 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 00:42:07.168122 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:42:07.168132 | orchestrator |  "sdb": { 2026-04-09 00:42:07.168143 | orchestrator |  "osd_lvm_uuid": "d2293633-4853-52c3-92d9-c83407e5923f" 2026-04-09 00:42:07.168154 | orchestrator |  }, 2026-04-09 00:42:07.168165 | orchestrator |  "sdc": { 2026-04-09 00:42:07.168175 | orchestrator |  "osd_lvm_uuid": "9ee08831-7be2-5055-b7bf-21e225eea3cc" 2026-04-09 00:42:07.168185 | orchestrator |  } 2026-04-09 00:42:07.168196 | orchestrator |  } 2026-04-09 00:42:07.168205 | orchestrator | } 2026-04-09 00:42:07.168212 | orchestrator | 2026-04-09 00:42:07.168220 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-09 00:42:07.168231 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.119) 0:00:10.591 ******** 2026-04-09 00:42:07.168241 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.168251 | orchestrator | 2026-04-09 00:42:07.168262 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-09 00:42:07.168273 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.120) 0:00:10.711 ******** 2026-04-09 00:42:07.168283 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.168293 | orchestrator | 2026-04-09 00:42:07.168304 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-09 00:42:07.168314 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.097) 0:00:10.809 ******** 2026-04-09 00:42:07.168325 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:42:07.168360 | orchestrator | 2026-04-09 00:42:07.168371 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-09 00:42:07.168381 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 
(0:00:00.103) 0:00:10.912 ******** 2026-04-09 00:42:07.168390 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 00:42:07.168401 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-09 00:42:07.168411 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:42:07.168420 | orchestrator |  "sdb": { 2026-04-09 00:42:07.168429 | orchestrator |  "osd_lvm_uuid": "d2293633-4853-52c3-92d9-c83407e5923f" 2026-04-09 00:42:07.168438 | orchestrator |  }, 2026-04-09 00:42:07.168447 | orchestrator |  "sdc": { 2026-04-09 00:42:07.168458 | orchestrator |  "osd_lvm_uuid": "9ee08831-7be2-5055-b7bf-21e225eea3cc" 2026-04-09 00:42:07.168468 | orchestrator |  } 2026-04-09 00:42:07.168478 | orchestrator |  }, 2026-04-09 00:42:07.168488 | orchestrator |  "lvm_volumes": [ 2026-04-09 00:42:07.168498 | orchestrator |  { 2026-04-09 00:42:07.168508 | orchestrator |  "data": "osd-block-d2293633-4853-52c3-92d9-c83407e5923f", 2026-04-09 00:42:07.168518 | orchestrator |  "data_vg": "ceph-d2293633-4853-52c3-92d9-c83407e5923f" 2026-04-09 00:42:07.168536 | orchestrator |  }, 2026-04-09 00:42:07.168546 | orchestrator |  { 2026-04-09 00:42:07.168556 | orchestrator |  "data": "osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc", 2026-04-09 00:42:07.168567 | orchestrator |  "data_vg": "ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc" 2026-04-09 00:42:07.168577 | orchestrator |  } 2026-04-09 00:42:07.168588 | orchestrator |  ] 2026-04-09 00:42:07.168597 | orchestrator |  } 2026-04-09 00:42:07.168607 | orchestrator | } 2026-04-09 00:42:07.168616 | orchestrator | 2026-04-09 00:42:07.168627 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-09 00:42:07.168637 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.194) 0:00:11.107 ******** 2026-04-09 00:42:07.168646 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 00:42:07.168655 | orchestrator | 2026-04-09 00:42:07.168665 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-09 00:42:07.168675 | orchestrator | 2026-04-09 00:42:07.168683 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:42:07.168694 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:02.000) 0:00:13.107 ******** 2026-04-09 00:42:07.168704 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-09 00:42:07.168714 | orchestrator | 2026-04-09 00:42:07.168724 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:42:07.168734 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.221) 0:00:13.328 ******** 2026-04-09 00:42:07.168743 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:42:07.168752 | orchestrator | 2026-04-09 00:42:07.168772 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653498 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.200) 0:00:13.528 ******** 2026-04-09 00:42:13.653592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:42:13.653606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:42:13.653614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:42:13.653623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:42:13.653632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:42:13.653641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:42:13.653647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:42:13.653655 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:42:13.653661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-09 00:42:13.653667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:42:13.653672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-09 00:42:13.653677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:42:13.653696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:42:13.653702 | orchestrator | 2026-04-09 00:42:13.653708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653713 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.370) 0:00:13.899 ******** 2026-04-09 00:42:13.653719 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.653725 | orchestrator | 2026-04-09 00:42:13.653730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653736 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.184) 0:00:14.083 ******** 2026-04-09 00:42:13.653759 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.653765 | orchestrator | 2026-04-09 00:42:13.653770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653775 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.170) 0:00:14.254 ******** 2026-04-09 00:42:13.653780 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.653786 | orchestrator | 2026-04-09 00:42:13.653791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653796 | 
orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.161) 0:00:14.415 ******** 2026-04-09 00:42:13.653801 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.653806 | orchestrator | 2026-04-09 00:42:13.653812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653817 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.172) 0:00:14.588 ******** 2026-04-09 00:42:13.653822 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.653827 | orchestrator | 2026-04-09 00:42:13.653832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653837 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.178) 0:00:14.767 ******** 2026-04-09 00:42:13.653843 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.653848 | orchestrator | 2026-04-09 00:42:13.653853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653858 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.419) 0:00:15.186 ******** 2026-04-09 00:42:13.653863 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.653868 | orchestrator | 2026-04-09 00:42:13.653873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653879 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.167) 0:00:15.353 ******** 2026-04-09 00:42:13.653884 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.653889 | orchestrator | 2026-04-09 00:42:13.653894 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653899 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.173) 0:00:15.526 ******** 2026-04-09 00:42:13.653905 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65) 2026-04-09 00:42:13.653911 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65) 2026-04-09 00:42:13.653916 | orchestrator | 2026-04-09 00:42:13.653922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653927 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.368) 0:00:15.895 ******** 2026-04-09 00:42:13.653932 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299) 2026-04-09 00:42:13.653937 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299) 2026-04-09 00:42:13.653942 | orchestrator | 2026-04-09 00:42:13.653947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653952 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.355) 0:00:16.250 ******** 2026-04-09 00:42:13.653958 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb) 2026-04-09 00:42:13.653963 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb) 2026-04-09 00:42:13.653968 | orchestrator | 2026-04-09 00:42:13.653973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:13.653992 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.366) 0:00:16.616 ******** 2026-04-09 00:42:13.653997 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965) 2026-04-09 00:42:13.654003 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965) 2026-04-09 00:42:13.654008 | orchestrator | 2026-04-09 00:42:13.654068 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-09 00:42:13.654079 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.365) 0:00:16.981 ******** 2026-04-09 00:42:13.654090 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:42:13.654099 | orchestrator | 2026-04-09 00:42:13.654108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654115 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.302) 0:00:17.284 ******** 2026-04-09 00:42:13.654121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:42:13.654128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:42:13.654139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:42:13.654145 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:42:13.654172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:42:13.654178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:42:13.654184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:42:13.654190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:42:13.654196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-09 00:42:13.654202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:42:13.654208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-09 00:42:13.654214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:42:13.654220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:42:13.654226 | orchestrator | 2026-04-09 00:42:13.654232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654238 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.357) 0:00:17.641 ******** 2026-04-09 00:42:13.654244 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.654250 | orchestrator | 2026-04-09 00:42:13.654256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654262 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.160) 0:00:17.802 ******** 2026-04-09 00:42:13.654268 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.654274 | orchestrator | 2026-04-09 00:42:13.654280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654286 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.512) 0:00:18.314 ******** 2026-04-09 00:42:13.654292 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.654298 | orchestrator | 2026-04-09 00:42:13.654304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654309 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.176) 0:00:18.491 ******** 2026-04-09 00:42:13.654315 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.654321 | orchestrator | 2026-04-09 00:42:13.654327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654333 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.157) 0:00:18.649 ******** 2026-04-09 00:42:13.654363 
| orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.654369 | orchestrator | 2026-04-09 00:42:13.654375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654381 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.166) 0:00:18.815 ******** 2026-04-09 00:42:13.654387 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.654398 | orchestrator | 2026-04-09 00:42:13.654405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654410 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.175) 0:00:18.991 ******** 2026-04-09 00:42:13.654415 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.654421 | orchestrator | 2026-04-09 00:42:13.654426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654431 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.175) 0:00:19.166 ******** 2026-04-09 00:42:13.654436 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:13.654441 | orchestrator | 2026-04-09 00:42:13.654446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654452 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.161) 0:00:19.328 ******** 2026-04-09 00:42:13.654457 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-09 00:42:13.654463 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-09 00:42:13.654468 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-09 00:42:13.654473 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-09 00:42:13.654479 | orchestrator | 2026-04-09 00:42:13.654484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:13.654489 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.560) 
0:00:19.888 ******** 2026-04-09 00:42:13.654494 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918235 | orchestrator | 2026-04-09 00:42:18.918323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:18.918335 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.198) 0:00:20.086 ******** 2026-04-09 00:42:18.918389 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918395 | orchestrator | 2026-04-09 00:42:18.918400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:18.918404 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.178) 0:00:20.264 ******** 2026-04-09 00:42:18.918408 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918412 | orchestrator | 2026-04-09 00:42:18.918416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:18.918421 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.166) 0:00:20.431 ******** 2026-04-09 00:42:18.918425 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918428 | orchestrator | 2026-04-09 00:42:18.918432 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-09 00:42:18.918436 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.279) 0:00:20.710 ******** 2026-04-09 00:42:18.918441 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-09 00:42:18.918445 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-09 00:42:18.918449 | orchestrator | 2026-04-09 00:42:18.918453 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-09 00:42:18.918472 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.268) 0:00:20.979 ******** 2026-04-09 00:42:18.918479 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:42:18.918486 | orchestrator | 2026-04-09 00:42:18.918493 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-09 00:42:18.918500 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.105) 0:00:21.085 ******** 2026-04-09 00:42:18.918506 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918513 | orchestrator | 2026-04-09 00:42:18.918519 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-09 00:42:18.918530 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.103) 0:00:21.189 ******** 2026-04-09 00:42:18.918536 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918542 | orchestrator | 2026-04-09 00:42:18.918548 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-09 00:42:18.918555 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.104) 0:00:21.293 ******** 2026-04-09 00:42:18.918581 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:42:18.918590 | orchestrator | 2026-04-09 00:42:18.918594 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-09 00:42:18.918598 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.105) 0:00:21.398 ******** 2026-04-09 00:42:18.918602 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd534d538-4d4e-5604-9605-85867297f7ab'}}) 2026-04-09 00:42:18.918607 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6327354e-b41f-514e-b570-068bfc1f3295'}}) 2026-04-09 00:42:18.918610 | orchestrator | 2026-04-09 00:42:18.918614 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-09 00:42:18.918618 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.139) 0:00:21.537 ******** 2026-04-09 00:42:18.918623 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd534d538-4d4e-5604-9605-85867297f7ab'}})  2026-04-09 00:42:18.918628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6327354e-b41f-514e-b570-068bfc1f3295'}})  2026-04-09 00:42:18.918632 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918636 | orchestrator | 2026-04-09 00:42:18.918640 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-09 00:42:18.918643 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.125) 0:00:21.663 ******** 2026-04-09 00:42:18.918648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd534d538-4d4e-5604-9605-85867297f7ab'}})  2026-04-09 00:42:18.918655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6327354e-b41f-514e-b570-068bfc1f3295'}})  2026-04-09 00:42:18.918662 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918668 | orchestrator | 2026-04-09 00:42:18.918674 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-09 00:42:18.918680 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.135) 0:00:21.798 ******** 2026-04-09 00:42:18.918686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd534d538-4d4e-5604-9605-85867297f7ab'}})  2026-04-09 00:42:18.918692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6327354e-b41f-514e-b570-068bfc1f3295'}})  2026-04-09 00:42:18.918698 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918705 | orchestrator | 2026-04-09 00:42:18.918710 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-09 00:42:18.918717 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 
(0:00:00.122) 0:00:21.921 ******** 2026-04-09 00:42:18.918723 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:42:18.918729 | orchestrator | 2026-04-09 00:42:18.918735 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-09 00:42:18.918741 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.106) 0:00:22.028 ******** 2026-04-09 00:42:18.918746 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:42:18.918750 | orchestrator | 2026-04-09 00:42:18.918754 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-09 00:42:18.918758 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.115) 0:00:22.143 ******** 2026-04-09 00:42:18.918776 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918780 | orchestrator | 2026-04-09 00:42:18.918785 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-09 00:42:18.918789 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.115) 0:00:22.259 ******** 2026-04-09 00:42:18.918794 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918798 | orchestrator | 2026-04-09 00:42:18.918803 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-09 00:42:18.918809 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.278) 0:00:22.538 ******** 2026-04-09 00:42:18.918816 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918828 | orchestrator | 2026-04-09 00:42:18.918835 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-09 00:42:18.918841 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.132) 0:00:22.671 ******** 2026-04-09 00:42:18.918848 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 00:42:18.918854 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:42:18.918861 | orchestrator |  "sdb": { 
2026-04-09 00:42:18.918867 | orchestrator |  "osd_lvm_uuid": "d534d538-4d4e-5604-9605-85867297f7ab" 2026-04-09 00:42:18.918874 | orchestrator |  }, 2026-04-09 00:42:18.918880 | orchestrator |  "sdc": { 2026-04-09 00:42:18.918887 | orchestrator |  "osd_lvm_uuid": "6327354e-b41f-514e-b570-068bfc1f3295" 2026-04-09 00:42:18.918892 | orchestrator |  } 2026-04-09 00:42:18.918897 | orchestrator |  } 2026-04-09 00:42:18.918901 | orchestrator | } 2026-04-09 00:42:18.918905 | orchestrator | 2026-04-09 00:42:18.918910 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-09 00:42:18.918914 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.113) 0:00:22.784 ******** 2026-04-09 00:42:18.918920 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918926 | orchestrator | 2026-04-09 00:42:18.918933 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-09 00:42:18.918939 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.111) 0:00:22.896 ******** 2026-04-09 00:42:18.918946 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918952 | orchestrator | 2026-04-09 00:42:18.918959 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-09 00:42:18.918964 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.102) 0:00:22.998 ******** 2026-04-09 00:42:18.918968 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:18.918972 | orchestrator | 2026-04-09 00:42:18.918977 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-09 00:42:18.918985 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.119) 0:00:23.118 ******** 2026-04-09 00:42:18.918990 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 00:42:18.918994 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-09 00:42:18.918998 | orchestrator | 
 "ceph_osd_devices": { 2026-04-09 00:42:18.919003 | orchestrator |  "sdb": { 2026-04-09 00:42:18.919007 | orchestrator |  "osd_lvm_uuid": "d534d538-4d4e-5604-9605-85867297f7ab" 2026-04-09 00:42:18.919012 | orchestrator |  }, 2026-04-09 00:42:18.919016 | orchestrator |  "sdc": { 2026-04-09 00:42:18.919020 | orchestrator |  "osd_lvm_uuid": "6327354e-b41f-514e-b570-068bfc1f3295" 2026-04-09 00:42:18.919024 | orchestrator |  } 2026-04-09 00:42:18.919029 | orchestrator |  }, 2026-04-09 00:42:18.919035 | orchestrator |  "lvm_volumes": [ 2026-04-09 00:42:18.919041 | orchestrator |  { 2026-04-09 00:42:18.919048 | orchestrator |  "data": "osd-block-d534d538-4d4e-5604-9605-85867297f7ab", 2026-04-09 00:42:18.919054 | orchestrator |  "data_vg": "ceph-d534d538-4d4e-5604-9605-85867297f7ab" 2026-04-09 00:42:18.919060 | orchestrator |  }, 2026-04-09 00:42:18.919067 | orchestrator |  { 2026-04-09 00:42:18.919073 | orchestrator |  "data": "osd-block-6327354e-b41f-514e-b570-068bfc1f3295", 2026-04-09 00:42:18.919077 | orchestrator |  "data_vg": "ceph-6327354e-b41f-514e-b570-068bfc1f3295" 2026-04-09 00:42:18.919082 | orchestrator |  } 2026-04-09 00:42:18.919086 | orchestrator |  ] 2026-04-09 00:42:18.919090 | orchestrator |  } 2026-04-09 00:42:18.919094 | orchestrator | } 2026-04-09 00:42:18.919099 | orchestrator | 2026-04-09 00:42:18.919103 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-09 00:42:18.919108 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.171) 0:00:23.289 ******** 2026-04-09 00:42:18.919112 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-09 00:42:18.919116 | orchestrator | 2026-04-09 00:42:18.919127 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-09 00:42:18.919132 | orchestrator | 2026-04-09 00:42:18.919136 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-04-09 00:42:18.919141 | orchestrator | Thursday 09 April 2026 00:42:17 +0000 (0:00:00.861) 0:00:24.151 ******** 2026-04-09 00:42:18.919145 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-09 00:42:18.919149 | orchestrator | 2026-04-09 00:42:18.919153 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:42:18.919157 | orchestrator | Thursday 09 April 2026 00:42:18 +0000 (0:00:00.334) 0:00:24.485 ******** 2026-04-09 00:42:18.919161 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:42:18.919164 | orchestrator | 2026-04-09 00:42:18.919168 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:18.919172 | orchestrator | Thursday 09 April 2026 00:42:18 +0000 (0:00:00.530) 0:00:25.016 ******** 2026-04-09 00:42:18.919176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-09 00:42:18.919179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-09 00:42:18.919183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-09 00:42:18.919187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-09 00:42:18.919190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-09 00:42:18.919199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-09 00:42:27.643489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-09 00:42:27.643539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-09 00:42:27.643552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-09 
00:42:27.643563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-09 00:42:27.643574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-09 00:42:27.643585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-09 00:42:27.643596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-09 00:42:27.643607 | orchestrator | 2026-04-09 00:42:27.643618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.643630 | orchestrator | Thursday 09 April 2026 00:42:18 +0000 (0:00:00.336) 0:00:25.353 ******** 2026-04-09 00:42:27.643641 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643653 | orchestrator | 2026-04-09 00:42:27.643664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.643675 | orchestrator | Thursday 09 April 2026 00:42:19 +0000 (0:00:00.174) 0:00:25.527 ******** 2026-04-09 00:42:27.643686 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643697 | orchestrator | 2026-04-09 00:42:27.643708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.643718 | orchestrator | Thursday 09 April 2026 00:42:19 +0000 (0:00:00.190) 0:00:25.717 ******** 2026-04-09 00:42:27.643736 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643753 | orchestrator | 2026-04-09 00:42:27.643773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.643793 | orchestrator | Thursday 09 April 2026 00:42:19 +0000 (0:00:00.180) 0:00:25.898 ******** 2026-04-09 00:42:27.643811 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643830 | orchestrator | 2026-04-09 00:42:27.643849 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.643869 | orchestrator | Thursday 09 April 2026 00:42:19 +0000 (0:00:00.152) 0:00:26.051 ******** 2026-04-09 00:42:27.643911 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643930 | orchestrator | 2026-04-09 00:42:27.643947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.643959 | orchestrator | Thursday 09 April 2026 00:42:19 +0000 (0:00:00.197) 0:00:26.249 ******** 2026-04-09 00:42:27.643970 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643980 | orchestrator | 2026-04-09 00:42:27.643991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.644001 | orchestrator | Thursday 09 April 2026 00:42:20 +0000 (0:00:00.159) 0:00:26.408 ******** 2026-04-09 00:42:27.644012 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.644022 | orchestrator | 2026-04-09 00:42:27.644034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.644045 | orchestrator | Thursday 09 April 2026 00:42:20 +0000 (0:00:00.204) 0:00:26.613 ******** 2026-04-09 00:42:27.644062 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.644080 | orchestrator | 2026-04-09 00:42:27.644099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.644117 | orchestrator | Thursday 09 April 2026 00:42:20 +0000 (0:00:00.171) 0:00:26.784 ******** 2026-04-09 00:42:27.644137 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168) 2026-04-09 00:42:27.644157 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168) 2026-04-09 00:42:27.644176 | orchestrator | 2026-04-09 00:42:27.644190 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-09 00:42:27.644203 | orchestrator | Thursday 09 April 2026 00:42:21 +0000 (0:00:00.732) 0:00:27.516 ******** 2026-04-09 00:42:27.644232 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2) 2026-04-09 00:42:27.644252 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2) 2026-04-09 00:42:27.644271 | orchestrator | 2026-04-09 00:42:27.644290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.644310 | orchestrator | Thursday 09 April 2026 00:42:22 +0000 (0:00:01.041) 0:00:28.557 ******** 2026-04-09 00:42:27.644330 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f) 2026-04-09 00:42:27.644418 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f) 2026-04-09 00:42:27.644446 | orchestrator | 2026-04-09 00:42:27.644467 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.644485 | orchestrator | Thursday 09 April 2026 00:42:22 +0000 (0:00:00.431) 0:00:28.989 ******** 2026-04-09 00:42:27.644502 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669) 2026-04-09 00:42:27.644518 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669) 2026-04-09 00:42:27.644532 | orchestrator | 2026-04-09 00:42:27.644547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:27.644563 | orchestrator | Thursday 09 April 2026 00:42:23 +0000 (0:00:00.529) 0:00:29.518 ******** 2026-04-09 00:42:27.644579 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:42:27.644595 | 
orchestrator | 2026-04-09 00:42:27.644612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.644645 | orchestrator | Thursday 09 April 2026 00:42:23 +0000 (0:00:00.340) 0:00:29.859 ******** 2026-04-09 00:42:27.644657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-09 00:42:27.644667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-09 00:42:27.644677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-09 00:42:27.644686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-09 00:42:27.644707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-09 00:42:27.644717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-09 00:42:27.644726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-09 00:42:27.644735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-09 00:42:27.644745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-09 00:42:27.644754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-09 00:42:27.644763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-09 00:42:27.644773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-09 00:42:27.644782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-09 00:42:27.644792 | orchestrator | 
2026-04-09 00:42:27.644803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.644819 | orchestrator | Thursday 09 April 2026 00:42:23 +0000 (0:00:00.457) 0:00:30.316 ******** 2026-04-09 00:42:27.644836 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.644863 | orchestrator | 2026-04-09 00:42:27.644880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.644896 | orchestrator | Thursday 09 April 2026 00:42:24 +0000 (0:00:00.212) 0:00:30.529 ******** 2026-04-09 00:42:27.644911 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.644928 | orchestrator | 2026-04-09 00:42:27.644943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.644960 | orchestrator | Thursday 09 April 2026 00:42:24 +0000 (0:00:00.222) 0:00:30.752 ******** 2026-04-09 00:42:27.644975 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.644990 | orchestrator | 2026-04-09 00:42:27.645004 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645019 | orchestrator | Thursday 09 April 2026 00:42:24 +0000 (0:00:00.206) 0:00:30.958 ******** 2026-04-09 00:42:27.645035 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645052 | orchestrator | 2026-04-09 00:42:27.645069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645086 | orchestrator | Thursday 09 April 2026 00:42:24 +0000 (0:00:00.229) 0:00:31.188 ******** 2026-04-09 00:42:27.645101 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645111 | orchestrator | 2026-04-09 00:42:27.645121 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645130 | orchestrator | Thursday 09 April 2026 00:42:25 +0000 
(0:00:00.193) 0:00:31.381 ******** 2026-04-09 00:42:27.645140 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645149 | orchestrator | 2026-04-09 00:42:27.645159 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645168 | orchestrator | Thursday 09 April 2026 00:42:25 +0000 (0:00:00.714) 0:00:32.096 ******** 2026-04-09 00:42:27.645178 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645187 | orchestrator | 2026-04-09 00:42:27.645196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645206 | orchestrator | Thursday 09 April 2026 00:42:25 +0000 (0:00:00.192) 0:00:32.288 ******** 2026-04-09 00:42:27.645215 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645225 | orchestrator | 2026-04-09 00:42:27.645234 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645243 | orchestrator | Thursday 09 April 2026 00:42:26 +0000 (0:00:00.212) 0:00:32.500 ******** 2026-04-09 00:42:27.645253 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-09 00:42:27.645273 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-09 00:42:27.645282 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-09 00:42:27.645296 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-09 00:42:27.645312 | orchestrator | 2026-04-09 00:42:27.645326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645341 | orchestrator | Thursday 09 April 2026 00:42:26 +0000 (0:00:00.687) 0:00:33.188 ******** 2026-04-09 00:42:27.645379 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645394 | orchestrator | 2026-04-09 00:42:27.645409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645427 | orchestrator | 
Thursday 09 April 2026 00:42:27 +0000 (0:00:00.187) 0:00:33.375 ******** 2026-04-09 00:42:27.645442 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645459 | orchestrator | 2026-04-09 00:42:27.645469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645478 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.207) 0:00:33.583 ******** 2026-04-09 00:42:27.645488 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645498 | orchestrator | 2026-04-09 00:42:27.645507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.645516 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.186) 0:00:33.769 ******** 2026-04-09 00:42:27.645526 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.645536 | orchestrator | 2026-04-09 00:42:27.645554 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-09 00:42:31.456228 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.233) 0:00:34.002 ******** 2026-04-09 00:42:31.456344 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-09 00:42:31.456430 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-09 00:42:31.456449 | orchestrator | 2026-04-09 00:42:31.456465 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-09 00:42:31.456482 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.163) 0:00:34.166 ******** 2026-04-09 00:42:31.456498 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.456515 | orchestrator | 2026-04-09 00:42:31.456533 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-09 00:42:31.456548 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.146) 0:00:34.313 ******** 
2026-04-09 00:42:31.456587 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.456605 | orchestrator | 2026-04-09 00:42:31.456622 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-09 00:42:31.456638 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.127) 0:00:34.441 ******** 2026-04-09 00:42:31.456655 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.456669 | orchestrator | 2026-04-09 00:42:31.456681 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-09 00:42:31.456691 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.115) 0:00:34.556 ******** 2026-04-09 00:42:31.456700 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:42:31.456711 | orchestrator | 2026-04-09 00:42:31.456721 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-09 00:42:31.456731 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.344) 0:00:34.900 ******** 2026-04-09 00:42:31.456741 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a254e30f-06f2-55f8-8a7e-64e382968b4c'}}) 2026-04-09 00:42:31.456757 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a6a3488f-30e9-5ba3-9724-16c1df88c443'}}) 2026-04-09 00:42:31.456769 | orchestrator | 2026-04-09 00:42:31.456780 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-09 00:42:31.456792 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.192) 0:00:35.092 ******** 2026-04-09 00:42:31.456803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a254e30f-06f2-55f8-8a7e-64e382968b4c'}})  2026-04-09 00:42:31.456836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a6a3488f-30e9-5ba3-9724-16c1df88c443'}})  
2026-04-09 00:42:31.456848 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.456859 | orchestrator | 2026-04-09 00:42:31.456871 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-09 00:42:31.456882 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.139) 0:00:35.231 ******** 2026-04-09 00:42:31.456893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a254e30f-06f2-55f8-8a7e-64e382968b4c'}})  2026-04-09 00:42:31.456904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a6a3488f-30e9-5ba3-9724-16c1df88c443'}})  2026-04-09 00:42:31.456916 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.456927 | orchestrator | 2026-04-09 00:42:31.456938 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-09 00:42:31.456949 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.157) 0:00:35.389 ******** 2026-04-09 00:42:31.456960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a254e30f-06f2-55f8-8a7e-64e382968b4c'}})  2026-04-09 00:42:31.456972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a6a3488f-30e9-5ba3-9724-16c1df88c443'}})  2026-04-09 00:42:31.456983 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.456994 | orchestrator | 2026-04-09 00:42:31.457005 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-09 00:42:31.457016 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.186) 0:00:35.575 ******** 2026-04-09 00:42:31.457027 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:42:31.457038 | orchestrator | 2026-04-09 00:42:31.457049 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-09 00:42:31.457061 | 
orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.146) 0:00:35.722 ******** 2026-04-09 00:42:31.457072 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:42:31.457083 | orchestrator | 2026-04-09 00:42:31.457095 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-09 00:42:31.457105 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.140) 0:00:35.863 ******** 2026-04-09 00:42:31.457116 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.457128 | orchestrator | 2026-04-09 00:42:31.457139 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-09 00:42:31.457148 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.102) 0:00:35.966 ******** 2026-04-09 00:42:31.457158 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.457168 | orchestrator | 2026-04-09 00:42:31.457177 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-09 00:42:31.457187 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.115) 0:00:36.082 ******** 2026-04-09 00:42:31.457196 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.457206 | orchestrator | 2026-04-09 00:42:31.457215 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-09 00:42:31.457225 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.107) 0:00:36.190 ******** 2026-04-09 00:42:31.457234 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 00:42:31.457244 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:42:31.457254 | orchestrator |  "sdb": { 2026-04-09 00:42:31.457283 | orchestrator |  "osd_lvm_uuid": "a254e30f-06f2-55f8-8a7e-64e382968b4c" 2026-04-09 00:42:31.457294 | orchestrator |  }, 2026-04-09 00:42:31.457304 | orchestrator |  "sdc": { 2026-04-09 00:42:31.457314 | orchestrator |  "osd_lvm_uuid": 
"a6a3488f-30e9-5ba3-9724-16c1df88c443" 2026-04-09 00:42:31.457323 | orchestrator |  } 2026-04-09 00:42:31.457333 | orchestrator |  } 2026-04-09 00:42:31.457343 | orchestrator | } 2026-04-09 00:42:31.457392 | orchestrator | 2026-04-09 00:42:31.457412 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-09 00:42:31.457421 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.123) 0:00:36.313 ******** 2026-04-09 00:42:31.457431 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.457441 | orchestrator | 2026-04-09 00:42:31.457450 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-09 00:42:31.457460 | orchestrator | Thursday 09 April 2026 00:42:30 +0000 (0:00:00.102) 0:00:36.415 ******** 2026-04-09 00:42:31.457470 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.457479 | orchestrator | 2026-04-09 00:42:31.457489 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-09 00:42:31.457498 | orchestrator | Thursday 09 April 2026 00:42:30 +0000 (0:00:00.244) 0:00:36.660 ******** 2026-04-09 00:42:31.457508 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:31.457518 | orchestrator | 2026-04-09 00:42:31.457527 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-09 00:42:31.457537 | orchestrator | Thursday 09 April 2026 00:42:30 +0000 (0:00:00.121) 0:00:36.782 ******** 2026-04-09 00:42:31.457547 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 00:42:31.457557 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-09 00:42:31.457567 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:42:31.457576 | orchestrator |  "sdb": { 2026-04-09 00:42:31.457586 | orchestrator |  "osd_lvm_uuid": "a254e30f-06f2-55f8-8a7e-64e382968b4c" 2026-04-09 00:42:31.457596 | orchestrator |  }, 2026-04-09 00:42:31.457605 | 
orchestrator |  "sdc": { 2026-04-09 00:42:31.457615 | orchestrator |  "osd_lvm_uuid": "a6a3488f-30e9-5ba3-9724-16c1df88c443" 2026-04-09 00:42:31.457625 | orchestrator |  } 2026-04-09 00:42:31.457634 | orchestrator |  }, 2026-04-09 00:42:31.457644 | orchestrator |  "lvm_volumes": [ 2026-04-09 00:42:31.457654 | orchestrator |  { 2026-04-09 00:42:31.457664 | orchestrator |  "data": "osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c", 2026-04-09 00:42:31.457673 | orchestrator |  "data_vg": "ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c" 2026-04-09 00:42:31.457683 | orchestrator |  }, 2026-04-09 00:42:31.457697 | orchestrator |  { 2026-04-09 00:42:31.457707 | orchestrator |  "data": "osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443", 2026-04-09 00:42:31.457717 | orchestrator |  "data_vg": "ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443" 2026-04-09 00:42:31.457726 | orchestrator |  } 2026-04-09 00:42:31.457736 | orchestrator |  ] 2026-04-09 00:42:31.457746 | orchestrator |  } 2026-04-09 00:42:31.457755 | orchestrator | } 2026-04-09 00:42:31.457770 | orchestrator | 2026-04-09 00:42:31.457785 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-09 00:42:31.457798 | orchestrator | Thursday 09 April 2026 00:42:30 +0000 (0:00:00.193) 0:00:36.976 ******** 2026-04-09 00:42:31.457818 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-09 00:42:31.457841 | orchestrator | 2026-04-09 00:42:31.457856 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:42:31.457871 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-09 00:42:31.457888 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-09 00:42:31.457903 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-09 
00:42:31.457917 | orchestrator | 2026-04-09 00:42:31.457932 | orchestrator | 2026-04-09 00:42:31.457947 | orchestrator | 2026-04-09 00:42:31.457962 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:42:31.457978 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.828) 0:00:37.805 ******** 2026-04-09 00:42:31.458011 | orchestrator | =============================================================================== 2026-04-09 00:42:31.458108 | orchestrator | Write configuration file ------------------------------------------------ 3.69s 2026-04-09 00:42:31.458125 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s 2026-04-09 00:42:31.458150 | orchestrator | Add known links to the list of available block devices ------------------ 1.06s 2026-04-09 00:42:31.458166 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s 2026-04-09 00:42:31.458183 | orchestrator | Get initial list of available block devices ----------------------------- 0.92s 2026-04-09 00:42:31.458198 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2026-04-09 00:42:31.458214 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2026-04-09 00:42:31.458230 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-04-09 00:42:31.458246 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-04-09 00:42:31.458263 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-04-09 00:42:31.458279 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-04-09 00:42:31.458294 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2026-04-09 
00:42:31.458311 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.57s 2026-04-09 00:42:31.458343 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.57s 2026-04-09 00:42:31.690070 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s 2026-04-09 00:42:31.690176 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.56s 2026-04-09 00:42:31.690192 | orchestrator | Print configuration data ------------------------------------------------ 0.56s 2026-04-09 00:42:31.690205 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2026-04-09 00:42:31.690216 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2026-04-09 00:42:31.690228 | orchestrator | Add known partitions to the list of available block devices ------------- 0.51s 2026-04-09 00:42:53.287574 | orchestrator | 2026-04-09 00:42:53 | INFO  | Task 09efa308-33cf-4c61-8e26-d34cdc851586 (sync inventory) is running in background. Output coming soon. 
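The "Generate lvm_volumes structure (block only)" task in the play above turns each entry of `ceph_osd_devices` into a data LV/VG pair, as shown in the "Print configuration data" output. A minimal Python sketch of that mapping (the function name is hypothetical and not part of OSISM; the `osd-block-<uuid>` / `ceph-<uuid>` naming follows the logged output):

```python
# Sketch: derive the lvm_volumes list from ceph_osd_devices, matching the
# block-only case printed by the play. Purely illustrative, not OSISM code.
def build_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# Devices as printed for testbed-node-5 in the log above:
devices = {
    "sdb": {"osd_lvm_uuid": "a254e30f-06f2-55f8-8a7e-64e382968b4c"},
    "sdc": {"osd_lvm_uuid": "a6a3488f-30e9-5ba3-9724-16c1df88c443"},
}
print(build_lvm_volumes(devices))
```

The result matches the `lvm_volumes` list in the node's "Print configuration data" output, which is then persisted per host by the "Write configuration file" handler.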
2026-04-09 00:43:20.034732 | orchestrator | 2026-04-09 00:42:54 | INFO  | Starting group_vars file reorganization
2026-04-09 00:43:20.034842 | orchestrator | 2026-04-09 00:42:54 | INFO  | Moved 0 file(s) to their respective directories
2026-04-09 00:43:20.034859 | orchestrator | 2026-04-09 00:42:54 | INFO  | Group_vars file reorganization completed
2026-04-09 00:43:20.034873 | orchestrator | 2026-04-09 00:42:57 | INFO  | Starting variable preparation from inventory
2026-04-09 00:43:20.034887 | orchestrator | 2026-04-09 00:43:00 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-09 00:43:20.034899 | orchestrator | 2026-04-09 00:43:00 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-09 00:43:20.034966 | orchestrator | 2026-04-09 00:43:00 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-09 00:43:20.034982 | orchestrator | 2026-04-09 00:43:00 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-09 00:43:20.034995 | orchestrator | 2026-04-09 00:43:00 | INFO  | Variable preparation completed
2026-04-09 00:43:20.035067 | orchestrator | 2026-04-09 00:43:01 | INFO  | Starting inventory overwrite handling
2026-04-09 00:43:20.035081 | orchestrator | 2026-04-09 00:43:01 | INFO  | Handling group overwrites in 99-overwrite
2026-04-09 00:43:20.035093 | orchestrator | 2026-04-09 00:43:01 | INFO  | Removing group frr:children from 60-generic
2026-04-09 00:43:20.035125 | orchestrator | 2026-04-09 00:43:01 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-09 00:43:20.035133 | orchestrator | 2026-04-09 00:43:01 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-09 00:43:20.035141 | orchestrator | 2026-04-09 00:43:01 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-09 00:43:20.035148 | orchestrator | 2026-04-09 00:43:01 | INFO  | Handling group overwrites in 20-roles
2026-04-09 00:43:20.035156 | orchestrator | 2026-04-09 00:43:01 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-09 00:43:20.035163 | orchestrator | 2026-04-09 00:43:01 | INFO  | Removed 5 group(s) in total
2026-04-09 00:43:20.035170 | orchestrator | 2026-04-09 00:43:01 | INFO  | Inventory overwrite handling completed
2026-04-09 00:43:20.035178 | orchestrator | 2026-04-09 00:43:02 | INFO  | Starting merge of inventory files
2026-04-09 00:43:20.035185 | orchestrator | 2026-04-09 00:43:02 | INFO  | Inventory files merged successfully
2026-04-09 00:43:20.035192 | orchestrator | 2026-04-09 00:43:05 | INFO  | Generating minified hosts file
2026-04-09 00:43:20.035199 | orchestrator | 2026-04-09 00:43:07 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-09 00:43:20.035208 | orchestrator | 2026-04-09 00:43:07 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-09 00:43:20.035215 | orchestrator | 2026-04-09 00:43:08 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-09 00:43:20.035222 | orchestrator | 2026-04-09 00:43:18 | INFO  | Successfully wrote ClusterShell configuration
2026-04-09 00:43:20.035230 | orchestrator | [master 84eda39] 2026-04-09-00-43
2026-04-09 00:43:20.035238 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-09 00:43:20.035247 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-09 00:43:20.035254 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-09 00:43:20.035262 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-09 00:43:21.231347 | orchestrator | 2026-04-09 00:43:21 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-09 00:43:21.286225 | orchestrator | 2026-04-09 00:43:21 | INFO  | Task bb84343a-fa4b-47a8-94dd-dd85f5e853a0 (ceph-create-lvm-devices) was prepared for execution.
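The overwrite handling above ("Removing group frr:children from 60-generic", "Removed 5 group(s) in total") boils down to: any group redefined by a higher-priority inventory layer is deleted from the lower-priority layers before the files are merged. A minimal sketch of that idea, assuming a simplified in-memory representation (this is an illustration, not the actual OSISM implementation):

```python
# Sketch of the inventory overwrite step: groups defined in an overlay layer
# (e.g. "99-overwrite") are removed from all other layers prior to merging.
# The layer and group names below mirror the log; the data model is assumed.

def handle_overwrites(layers: dict, overlay: str) -> int:
    """Remove groups defined in `overlay` from all other layers.

    `layers` maps layer name -> {group name -> group body}.
    Returns the number of groups removed.
    """
    removed = 0
    overlay_groups = set(layers.get(overlay, {}))
    for name, groups in layers.items():
        if name == overlay:
            continue
        for group in sorted(overlay_groups & set(groups)):
            print(f"Removing group {group} from {name}")
            del groups[group]
            removed += 1
    return removed


layers = {
    "60-generic": {"frr:children": {}},
    "50-infrastructure": {"netbird:children": {}, "k3s_node": {}},
    "99-overwrite": {"frr:children": {}, "netbird:children": {}},
}
print(f"Removed {handle_overwrites(layers, '99-overwrite')} group(s) in total")
```

In the real run two overlay layers (99-overwrite and 20-roles) are processed in sequence, which is how the total of 5 removed groups comes about.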
2026-04-09 00:43:21.286320 | orchestrator | 2026-04-09 00:43:21 | INFO  | It takes a moment until task bb84343a-fa4b-47a8-94dd-dd85f5e853a0 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-09 00:43:33.028050 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 00:43:33.028123 | orchestrator | 2.16.14
2026-04-09 00:43:33.028131 | orchestrator |
2026-04-09 00:43:33.028136 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-09 00:43:33.028141 | orchestrator |
2026-04-09 00:43:33.028145 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 00:43:33.028150 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.274) 0:00:00.274 ********
2026-04-09 00:43:33.028155 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 00:43:33.028159 | orchestrator |
2026-04-09 00:43:33.028163 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 00:43:33.028167 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.237) 0:00:00.512 ********
2026-04-09 00:43:33.028171 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:33.028176 | orchestrator |
2026-04-09 00:43:33.028180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028184 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.209) 0:00:00.722 ********
2026-04-09 00:43:33.028202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-09 00:43:33.028206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-09 00:43:33.028210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-09 00:43:33.028214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-09 00:43:33.028218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-09 00:43:33.028222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-09 00:43:33.028226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-09 00:43:33.028229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-09 00:43:33.028234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-09 00:43:33.028238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-09 00:43:33.028241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-09 00:43:33.028245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-09 00:43:33.028249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-09 00:43:33.028253 | orchestrator |
2026-04-09 00:43:33.028257 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028260 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.376) 0:00:01.098 ********
2026-04-09 00:43:33.028264 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028268 | orchestrator |
2026-04-09 00:43:33.028272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028276 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.464) 0:00:01.562 ********
2026-04-09 00:43:33.028280 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028283 | orchestrator |
2026-04-09 00:43:33.028287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028291 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.203) 0:00:01.766 ********
2026-04-09 00:43:33.028307 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028311 | orchestrator |
2026-04-09 00:43:33.028315 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028319 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.203) 0:00:01.969 ********
2026-04-09 00:43:33.028322 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028326 | orchestrator |
2026-04-09 00:43:33.028330 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028334 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.183) 0:00:02.152 ********
2026-04-09 00:43:33.028338 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028342 | orchestrator |
2026-04-09 00:43:33.028345 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028349 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.175) 0:00:02.328 ********
2026-04-09 00:43:33.028353 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028357 | orchestrator |
2026-04-09 00:43:33.028361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028365 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.172) 0:00:02.501 ********
2026-04-09 00:43:33.028369 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028432 | orchestrator |
2026-04-09 00:43:33.028439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028445 | orchestrator | Thursday 09 April 2026 00:43:28 +0000 (0:00:00.198) 0:00:02.699 ********
2026-04-09 00:43:33.028452 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028464 | orchestrator |
2026-04-09 00:43:33.028469 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028473 | orchestrator | Thursday 09 April 2026 00:43:28 +0000 (0:00:00.202) 0:00:02.902 ********
2026-04-09 00:43:33.028477 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610)
2026-04-09 00:43:33.028482 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610)
2026-04-09 00:43:33.028486 | orchestrator |
2026-04-09 00:43:33.028490 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028504 | orchestrator | Thursday 09 April 2026 00:43:28 +0000 (0:00:00.409) 0:00:03.311 ********
2026-04-09 00:43:33.028508 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289)
2026-04-09 00:43:33.028512 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289)
2026-04-09 00:43:33.028516 | orchestrator |
2026-04-09 00:43:33.028520 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028524 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:00.453) 0:00:03.765 ********
2026-04-09 00:43:33.028528 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2)
2026-04-09 00:43:33.028531 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2)
2026-04-09 00:43:33.028535 | orchestrator |
2026-04-09 00:43:33.028539 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028543 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:00.733) 0:00:04.498 ********
2026-04-09 00:43:33.028547 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d)
2026-04-09 00:43:33.028551 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d)
2026-04-09 00:43:33.028554 | orchestrator |
2026-04-09 00:43:33.028558 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:33.028569 | orchestrator | Thursday 09 April 2026 00:43:30 +0000 (0:00:00.658) 0:00:05.157 ********
2026-04-09 00:43:33.028574 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 00:43:33.028580 | orchestrator |
2026-04-09 00:43:33.028586 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:33.028597 | orchestrator | Thursday 09 April 2026 00:43:31 +0000 (0:00:00.690) 0:00:05.848 ********
2026-04-09 00:43:33.028603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-09 00:43:33.028609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-09 00:43:33.028615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-09 00:43:33.028622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-09 00:43:33.028629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-09 00:43:33.028635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-09 00:43:33.028642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-09 00:43:33.028649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-09 00:43:33.028656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-09 00:43:33.028663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-09 00:43:33.028670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-09 00:43:33.028677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-09 00:43:33.028690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-09 00:43:33.028695 | orchestrator |
2026-04-09 00:43:33.028700 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:33.028705 | orchestrator | Thursday 09 April 2026 00:43:31 +0000 (0:00:00.430) 0:00:06.278 ********
2026-04-09 00:43:33.028709 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028714 | orchestrator |
2026-04-09 00:43:33.028718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:33.028723 | orchestrator | Thursday 09 April 2026 00:43:31 +0000 (0:00:00.192) 0:00:06.471 ********
2026-04-09 00:43:33.028727 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028732 | orchestrator |
2026-04-09 00:43:33.028736 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:33.028741 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.190) 0:00:06.662 ********
2026-04-09 00:43:33.028745 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028750 | orchestrator |
2026-04-09 00:43:33.028754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:33.028759 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.186) 0:00:06.848 ********
2026-04-09 00:43:33.028763 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028768 | orchestrator |
2026-04-09 00:43:33.028772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:33.028777 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.189) 0:00:07.038 ********
2026-04-09 00:43:33.028781 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028785 | orchestrator |
2026-04-09 00:43:33.028790 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:33.028794 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.190) 0:00:07.229 ********
2026-04-09 00:43:33.028799 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028803 | orchestrator |
2026-04-09 00:43:33.028808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:33.028812 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.192) 0:00:07.421 ********
2026-04-09 00:43:33.028817 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:33.028822 | orchestrator |
2026-04-09 00:43:33.028830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:41.466824 | orchestrator | Thursday 09 April 2026 00:43:33 +0000 (0:00:00.185) 0:00:07.606 ********
2026-04-09 00:43:41.466925 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.466939 | orchestrator |
2026-04-09 00:43:41.466948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:41.466956 | orchestrator | Thursday 09 April 2026 00:43:33 +0000 (0:00:00.195) 0:00:07.802 ********
2026-04-09 00:43:41.466966 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-09 00:43:41.466975 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-09 00:43:41.466983 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-09 00:43:41.466990 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-09 00:43:41.466999 | orchestrator |
2026-04-09 00:43:41.467006 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:41.467014 | orchestrator | Thursday 09 April 2026 00:43:34 +0000 (0:00:01.331) 0:00:09.134 ********
2026-04-09 00:43:41.467022 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467030 | orchestrator |
2026-04-09 00:43:41.467037 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:41.467045 | orchestrator | Thursday 09 April 2026 00:43:34 +0000 (0:00:00.209) 0:00:09.343 ********
2026-04-09 00:43:41.467054 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467061 | orchestrator |
2026-04-09 00:43:41.467069 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:41.467100 | orchestrator | Thursday 09 April 2026 00:43:34 +0000 (0:00:00.215) 0:00:09.559 ********
2026-04-09 00:43:41.467108 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467117 | orchestrator |
2026-04-09 00:43:41.467125 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:41.467133 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 (0:00:00.208) 0:00:09.767 ********
2026-04-09 00:43:41.467141 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467148 | orchestrator |
2026-04-09 00:43:41.467153 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-09 00:43:41.467158 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 (0:00:00.250) 0:00:10.018 ********
2026-04-09 00:43:41.467163 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467168 | orchestrator |
2026-04-09 00:43:41.467173 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-09 00:43:41.467178 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 (0:00:00.134) 0:00:10.152 ********
2026-04-09 00:43:41.467183 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2293633-4853-52c3-92d9-c83407e5923f'}})
2026-04-09 00:43:41.467189 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ee08831-7be2-5055-b7bf-21e225eea3cc'}})
2026-04-09 00:43:41.467193 | orchestrator |
2026-04-09 00:43:41.467198 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-09 00:43:41.467203 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 (0:00:00.209) 0:00:10.362 ********
2026-04-09 00:43:41.467209 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467215 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467220 | orchestrator |
2026-04-09 00:43:41.467225 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-09 00:43:41.467230 | orchestrator | Thursday 09 April 2026 00:43:37 +0000 (0:00:02.029) 0:00:12.392 ********
2026-04-09 00:43:41.467235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467254 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467259 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467264 | orchestrator |
2026-04-09 00:43:41.467269 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-09 00:43:41.467274 | orchestrator | Thursday 09 April 2026 00:43:38 +0000 (0:00:00.211) 0:00:12.603 ********
2026-04-09 00:43:41.467278 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467283 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467288 | orchestrator |
2026-04-09 00:43:41.467293 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-09 00:43:41.467298 | orchestrator | Thursday 09 April 2026 00:43:39 +0000 (0:00:01.444) 0:00:14.048 ********
2026-04-09 00:43:41.467302 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467307 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467312 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467317 | orchestrator |
2026-04-09 00:43:41.467322 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-09 00:43:41.467332 | orchestrator | Thursday 09 April 2026 00:43:39 +0000 (0:00:00.193) 0:00:14.241 ********
2026-04-09 00:43:41.467351 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467356 | orchestrator |
2026-04-09 00:43:41.467361 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-09 00:43:41.467366 | orchestrator | Thursday 09 April 2026 00:43:39 +0000 (0:00:00.134) 0:00:14.376 ********
2026-04-09 00:43:41.467371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467406 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467411 | orchestrator |
2026-04-09 00:43:41.467417 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-09 00:43:41.467423 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.351) 0:00:14.728 ********
2026-04-09 00:43:41.467429 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467434 | orchestrator |
2026-04-09 00:43:41.467439 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-09 00:43:41.467445 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.158) 0:00:14.887 ********
2026-04-09 00:43:41.467450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467461 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467467 | orchestrator |
2026-04-09 00:43:41.467476 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-09 00:43:41.467481 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.145) 0:00:15.033 ********
2026-04-09 00:43:41.467487 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467493 | orchestrator |
2026-04-09 00:43:41.467498 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-09 00:43:41.467503 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.140) 0:00:15.173 ********
2026-04-09 00:43:41.467509 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467520 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467526 | orchestrator |
2026-04-09 00:43:41.467531 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-09 00:43:41.467537 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.135) 0:00:15.335 ********
2026-04-09 00:43:41.467543 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:41.467548 | orchestrator |
2026-04-09 00:43:41.467554 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-09 00:43:41.467559 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.135) 0:00:15.470 ********
2026-04-09 00:43:41.467565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467577 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467582 | orchestrator |
2026-04-09 00:43:41.467588 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-09 00:43:41.467598 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.150) 0:00:15.621 ********
2026-04-09 00:43:41.467604 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467610 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467615 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467620 | orchestrator |
2026-04-09 00:43:41.467625 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-09 00:43:41.467630 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.145) 0:00:15.766 ********
2026-04-09 00:43:41.467634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:43:41.467639 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:43:41.467644 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467649 | orchestrator |
2026-04-09 00:43:41.467654 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-09 00:43:41.467658 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.146) 0:00:15.913 ********
2026-04-09 00:43:41.467663 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:41.467668 | orchestrator |
2026-04-09 00:43:41.467673 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 00:43:41.467681 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.135) 0:00:16.048 ********
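The "Create block VGs" and "Create block LVs" tasks above derive one `ceph-<uuid>` volume group and one `osd-block-<uuid>` logical volume per entry in `ceph_osd_devices`, keyed by each device's `osd_lvm_uuid`. A minimal sketch of that naming scheme, using the UUIDs visible in the log (illustrative only; the actual role renders these names in Jinja before calling the LVM modules):

```python
# Sketch of the VG/LV naming visible in the log: each device in
# ceph_osd_devices gets a volume group "ceph-<uuid>" holding a single
# logical volume "osd-block-<uuid>". The dict shape mirrors the log output.

def lvm_names(ceph_osd_devices: dict) -> list:
    """Build the per-device VG/LV name triples from osd_lvm_uuid values."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
            "device": f"/dev/{dev}",
        }
        for dev, cfg in sorted(ceph_osd_devices.items())
    ]


devices = {
    "sdb": {"osd_lvm_uuid": "d2293633-4853-52c3-92d9-c83407e5923f"},
    "sdc": {"osd_lvm_uuid": "9ee08831-7be2-5055-b7bf-21e225eea3cc"},
}
for item in lvm_names(devices):
    # Roughly what the LVM layer then does, e.g.:
    #   vgcreate <data_vg> <device> && lvcreate -l 100%VG -n <data> <data_vg>
    print(item["data_vg"], item["data"], item["device"])
```

Keeping the UUID in both the VG and LV name is what lets later plays (and `ceph-volume`) correlate a block device with its OSD without re-scanning.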
2026-04-09 00:43:47.207418 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:47.207515 | orchestrator |
2026-04-09 00:43:47.207527 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-09 00:43:47.207536 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.136) 0:00:16.184 ********
2026-04-09 00:43:47.207543 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:47.207551 | orchestrator |
2026-04-09 00:43:47.207557 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-09 00:43:47.207564 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.141) 0:00:16.326 ********
2026-04-09 00:43:47.207571 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:43:47.207579 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-09 00:43:47.207586 | orchestrator | }
2026-04-09 00:43:47.207594 | orchestrator |
2026-04-09 00:43:47.207601 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-09 00:43:47.207607 | orchestrator | Thursday 09 April 2026 00:43:42 +0000 (0:00:00.321) 0:00:16.647 ********
2026-04-09 00:43:47.207613 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:43:47.207620 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-09 00:43:47.207627 | orchestrator | }
2026-04-09 00:43:47.207633 | orchestrator |
2026-04-09 00:43:47.207640 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-09 00:43:47.207647 | orchestrator | Thursday 09 April 2026 00:43:42 +0000 (0:00:00.135) 0:00:16.783 ********
2026-04-09 00:43:47.207653 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:43:47.207661 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-09 00:43:47.207667 | orchestrator | }
2026-04-09 00:43:47.207674 | orchestrator |
2026-04-09 00:43:47.207682 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-09 00:43:47.207689 | orchestrator | Thursday 09 April 2026 00:43:42 +0000 (0:00:00.144) 0:00:16.928 ********
2026-04-09 00:43:47.207696 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:47.207703 | orchestrator |
2026-04-09 00:43:47.207710 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-09 00:43:47.207718 | orchestrator | Thursday 09 April 2026 00:43:43 +0000 (0:00:00.684) 0:00:17.612 ********
2026-04-09 00:43:47.207747 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:47.207755 | orchestrator |
2026-04-09 00:43:47.207762 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-09 00:43:47.207769 | orchestrator | Thursday 09 April 2026 00:43:43 +0000 (0:00:00.481) 0:00:18.093 ********
2026-04-09 00:43:47.207776 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:47.207783 | orchestrator |
2026-04-09 00:43:47.207791 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-09 00:43:47.207798 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:00.541) 0:00:18.635 ********
2026-04-09 00:43:47.207805 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:47.207812 | orchestrator |
2026-04-09 00:43:47.207818 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-09 00:43:47.207825 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:00.115) 0:00:18.751 ********
2026-04-09 00:43:47.207832 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:47.207839 | orchestrator |
2026-04-09 00:43:47.207845 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-09 00:43:47.207851 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:00.097) 0:00:18.848 ********
2026-04-09 00:43:47.207857 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:47.207863 | orchestrator |
2026-04-09 00:43:47.207870 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-09 00:43:47.207877 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:00.095) 0:00:18.944 ********
2026-04-09 00:43:47.207884 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:43:47.207891 | orchestrator |  "vgs_report": {
2026-04-09 00:43:47.207898 | orchestrator |  "vg": []
2026-04-09 00:43:47.207905 | orchestrator |  }
2026-04-09 00:43:47.207912 | orchestrator | }
2026-04-09 00:43:47.207919 | orchestrator |
2026-04-09 00:43:47.207926 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-09 00:43:47.207932 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:00.121) 0:00:19.065 ********
2026-04-09 00:43:47.207939 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:47.207946 | orchestrator |
2026-04-09 00:43:47.207953 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-09 00:43:47.207961 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:00.108) 0:00:19.174 ********
2026-04-09 00:43:47.207969 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:47.207976 | orchestrator |
2026-04-09 00:43:47.207984 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-09 00:43:47.207991 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:00.099) 0:00:19.273 ********
2026-04-09 00:43:47.207998 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:47.208005 | orchestrator |
2026-04-09 00:43:47.208013 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-09 00:43:47.208020 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:00.123) 0:00:19.396 ********
2026-04-09 00:43:47.208027 | orchestrator | skipping: [testbed-node-3]
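The sizing guards in this play ("Fail if size of DB LVs ... > available", "Fail if DB LV size < 30 GiB ...") reduce to simple arithmetic over the `vgs` report gathered above. A hedged sketch of those two checks, with the 30 GiB floor taken from the task titles (the real role expresses this as Ansible assert/fail tasks, and all of these checks are skipped in this run because no DB/WAL devices are configured):

```python
# Sketch of the DB LV sizing guards from the play above: the requested LVs
# must fit into the VG's free bytes, and each DB LV must be at least 30 GiB.
# Function name and signature are assumptions for illustration.

GIB = 1024 ** 3
MIN_DB_LV_BYTES = 30 * GIB  # floor named in the "Fail if DB LV size < 30 GiB" tasks

def check_db_lvs(vg_free_bytes: int, db_lv_sizes: list) -> None:
    """Raise ValueError if the requested DB LVs violate either guard."""
    needed = sum(db_lv_sizes)
    if needed > vg_free_bytes:
        raise ValueError(
            f"size of DB LVs ({needed}) > available ({vg_free_bytes})"
        )
    for size in db_lv_sizes:
        if size < MIN_DB_LV_BYTES:
            raise ValueError(f"DB LV size {size} < 30 GiB")

# Two 64 GiB DB LVs fit into a 200 GiB VG and clear the 30 GiB floor.
check_db_lvs(200 * GIB, [64 * GIB, 64 * GIB])
print("DB LV sizing checks passed")
```

Failing early here is cheaper than letting `lvcreate` partially succeed and leaving a node with an inconsistent set of DB/WAL volumes.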
2026-04-09 00:43:47.208034 | orchestrator | 2026-04-09 00:43:47.208042 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-09 00:43:47.208049 | orchestrator | Thursday 09 April 2026 00:43:45 +0000 (0:00:00.260) 0:00:19.657 ******** 2026-04-09 00:43:47.208057 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208064 | orchestrator | 2026-04-09 00:43:47.208071 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-09 00:43:47.208079 | orchestrator | Thursday 09 April 2026 00:43:45 +0000 (0:00:00.102) 0:00:19.759 ******** 2026-04-09 00:43:47.208086 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208093 | orchestrator | 2026-04-09 00:43:47.208100 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-09 00:43:47.208108 | orchestrator | Thursday 09 April 2026 00:43:45 +0000 (0:00:00.125) 0:00:19.884 ******** 2026-04-09 00:43:47.208115 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208128 | orchestrator | 2026-04-09 00:43:47.208136 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-09 00:43:47.208143 | orchestrator | Thursday 09 April 2026 00:43:45 +0000 (0:00:00.117) 0:00:20.002 ******** 2026-04-09 00:43:47.208167 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208175 | orchestrator | 2026-04-09 00:43:47.208198 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-09 00:43:47.208205 | orchestrator | Thursday 09 April 2026 00:43:45 +0000 (0:00:00.118) 0:00:20.120 ******** 2026-04-09 00:43:47.208211 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208218 | orchestrator | 2026-04-09 00:43:47.208224 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-09 00:43:47.208231 | orchestrator | 
Thursday 09 April 2026 00:43:45 +0000 (0:00:00.143) 0:00:20.263 ******** 2026-04-09 00:43:47.208238 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208245 | orchestrator | 2026-04-09 00:43:47.208252 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-09 00:43:47.208259 | orchestrator | Thursday 09 April 2026 00:43:45 +0000 (0:00:00.125) 0:00:20.389 ******** 2026-04-09 00:43:47.208266 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208272 | orchestrator | 2026-04-09 00:43:47.208278 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-09 00:43:47.208285 | orchestrator | Thursday 09 April 2026 00:43:45 +0000 (0:00:00.125) 0:00:20.514 ******** 2026-04-09 00:43:47.208291 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208298 | orchestrator | 2026-04-09 00:43:47.208305 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-09 00:43:47.208311 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.125) 0:00:20.640 ******** 2026-04-09 00:43:47.208318 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208324 | orchestrator | 2026-04-09 00:43:47.208331 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-09 00:43:47.208337 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.120) 0:00:20.760 ******** 2026-04-09 00:43:47.208344 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208350 | orchestrator | 2026-04-09 00:43:47.208360 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-09 00:43:47.208367 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.120) 0:00:20.881 ******** 2026-04-09 00:43:47.208429 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 
'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:47.208439 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:47.208446 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208452 | orchestrator | 2026-04-09 00:43:47.208459 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-09 00:43:47.208466 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.118) 0:00:21.000 ******** 2026-04-09 00:43:47.208472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:47.208479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:47.208486 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208492 | orchestrator | 2026-04-09 00:43:47.208499 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-09 00:43:47.208505 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.291) 0:00:21.291 ******** 2026-04-09 00:43:47.208512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:47.208527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:47.208540 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208547 | orchestrator | 2026-04-09 00:43:47.208554 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-09 00:43:47.208559 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.137) 0:00:21.429 ******** 2026-04-09 00:43:47.208565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:47.208571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:47.208577 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208583 | orchestrator | 2026-04-09 00:43:47.208589 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-09 00:43:47.208596 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.150) 0:00:21.579 ******** 2026-04-09 00:43:47.208602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:47.208609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:47.208615 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:47.208622 | orchestrator | 2026-04-09 00:43:47.208628 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-09 00:43:47.208635 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.135) 0:00:21.715 ******** 2026-04-09 00:43:47.208648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:51.986795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 
'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:51.986872 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:51.986880 | orchestrator | 2026-04-09 00:43:51.986888 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-09 00:43:51.986895 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.153) 0:00:21.869 ******** 2026-04-09 00:43:51.986900 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:51.986906 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:51.986911 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:51.986916 | orchestrator | 2026-04-09 00:43:51.986921 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-09 00:43:51.986926 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.135) 0:00:22.005 ******** 2026-04-09 00:43:51.986931 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:51.986948 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:51.986953 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:51.986958 | orchestrator | 2026-04-09 00:43:51.986963 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-09 00:43:51.986968 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.147) 0:00:22.152 ******** 2026-04-09 00:43:51.986973 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:43:51.986978 | 
orchestrator | 2026-04-09 00:43:51.986999 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-09 00:43:51.987004 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.488) 0:00:22.641 ******** 2026-04-09 00:43:51.987009 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:43:51.987014 | orchestrator | 2026-04-09 00:43:51.987019 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-09 00:43:51.987024 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.513) 0:00:23.154 ******** 2026-04-09 00:43:51.987028 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:43:51.987033 | orchestrator | 2026-04-09 00:43:51.987038 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-09 00:43:51.987043 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.127) 0:00:23.282 ******** 2026-04-09 00:43:51.987048 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'vg_name': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'}) 2026-04-09 00:43:51.987054 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'vg_name': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'}) 2026-04-09 00:43:51.987059 | orchestrator | 2026-04-09 00:43:51.987065 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-09 00:43:51.987069 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.159) 0:00:23.441 ******** 2026-04-09 00:43:51.987074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:51.987079 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 
'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:51.987084 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:51.987089 | orchestrator | 2026-04-09 00:43:51.987094 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-09 00:43:51.987099 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.166) 0:00:23.607 ******** 2026-04-09 00:43:51.987104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:51.987109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:51.987114 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:51.987119 | orchestrator | 2026-04-09 00:43:51.987124 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-09 00:43:51.987128 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.355) 0:00:23.963 ******** 2026-04-09 00:43:51.987133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})  2026-04-09 00:43:51.987138 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})  2026-04-09 00:43:51.987143 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:51.987148 | orchestrator | 2026-04-09 00:43:51.987153 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-09 00:43:51.987158 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.136) 0:00:24.100 ******** 2026-04-09 00:43:51.987173 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 
00:43:51.987179 | orchestrator |  "lvm_report": { 2026-04-09 00:43:51.987184 | orchestrator |  "lv": [ 2026-04-09 00:43:51.987189 | orchestrator |  { 2026-04-09 00:43:51.987194 | orchestrator |  "lv_name": "osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc", 2026-04-09 00:43:51.987200 | orchestrator |  "vg_name": "ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc" 2026-04-09 00:43:51.987205 | orchestrator |  }, 2026-04-09 00:43:51.987214 | orchestrator |  { 2026-04-09 00:43:51.987219 | orchestrator |  "lv_name": "osd-block-d2293633-4853-52c3-92d9-c83407e5923f", 2026-04-09 00:43:51.987223 | orchestrator |  "vg_name": "ceph-d2293633-4853-52c3-92d9-c83407e5923f" 2026-04-09 00:43:51.987228 | orchestrator |  } 2026-04-09 00:43:51.987233 | orchestrator |  ], 2026-04-09 00:43:51.987238 | orchestrator |  "pv": [ 2026-04-09 00:43:51.987243 | orchestrator |  { 2026-04-09 00:43:51.987248 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-09 00:43:51.987253 | orchestrator |  "vg_name": "ceph-d2293633-4853-52c3-92d9-c83407e5923f" 2026-04-09 00:43:51.987258 | orchestrator |  }, 2026-04-09 00:43:51.987262 | orchestrator |  { 2026-04-09 00:43:51.987267 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 00:43:51.987272 | orchestrator |  "vg_name": "ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc" 2026-04-09 00:43:51.987277 | orchestrator |  } 2026-04-09 00:43:51.987282 | orchestrator |  ] 2026-04-09 00:43:51.987287 | orchestrator |  } 2026-04-09 00:43:51.987292 | orchestrator | } 2026-04-09 00:43:51.987297 | orchestrator | 2026-04-09 00:43:51.987302 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-09 00:43:51.987307 | orchestrator | 2026-04-09 00:43:51.987312 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:43:51.987317 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.277) 0:00:24.377 ******** 2026-04-09 00:43:51.987322 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-09 00:43:51.987327 | orchestrator | 2026-04-09 00:43:51.987332 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:43:51.987337 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.257) 0:00:24.635 ******** 2026-04-09 00:43:51.987343 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:51.987348 | orchestrator | 2026-04-09 00:43:51.987354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:51.987360 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.209) 0:00:24.844 ******** 2026-04-09 00:43:51.987366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:43:51.987371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:43:51.987377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:43:51.987398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:43:51.987404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:43:51.987410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:43:51.987415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:43:51.987421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:43:51.987426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-09 00:43:51.987437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:43:51.987443 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-09 00:43:51.987448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:43:51.987454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:43:51.987460 | orchestrator | 2026-04-09 00:43:51.987465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:51.987471 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.360) 0:00:25.205 ******** 2026-04-09 00:43:51.987476 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.987486 | orchestrator | 2026-04-09 00:43:51.987491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:51.987497 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.173) 0:00:25.378 ******** 2026-04-09 00:43:51.987503 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.987508 | orchestrator | 2026-04-09 00:43:51.987514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:51.987519 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.184) 0:00:25.563 ******** 2026-04-09 00:43:51.987525 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.987530 | orchestrator | 2026-04-09 00:43:51.987536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:51.987542 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.174) 0:00:25.737 ******** 2026-04-09 00:43:51.987547 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.987553 | orchestrator | 2026-04-09 00:43:51.987559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:51.987564 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 
(0:00:00.451) 0:00:26.189 ******** 2026-04-09 00:43:51.987570 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.987575 | orchestrator | 2026-04-09 00:43:51.987581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:51.987586 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.188) 0:00:26.377 ******** 2026-04-09 00:43:51.987592 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.987597 | orchestrator | 2026-04-09 00:43:51.987610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.625529 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.187) 0:00:26.565 ******** 2026-04-09 00:44:01.626258 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626287 | orchestrator | 2026-04-09 00:44:01.626296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.626304 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.172) 0:00:26.738 ******** 2026-04-09 00:44:01.626310 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626317 | orchestrator | 2026-04-09 00:44:01.626323 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.626329 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.194) 0:00:26.933 ******** 2026-04-09 00:44:01.626337 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65) 2026-04-09 00:44:01.626345 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65) 2026-04-09 00:44:01.626353 | orchestrator | 2026-04-09 00:44:01.626360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.626367 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 
(0:00:00.361) 0:00:27.294 ******** 2026-04-09 00:44:01.626374 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299) 2026-04-09 00:44:01.626382 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299) 2026-04-09 00:44:01.626389 | orchestrator | 2026-04-09 00:44:01.626468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.626476 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.409) 0:00:27.703 ******** 2026-04-09 00:44:01.626483 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb) 2026-04-09 00:44:01.626488 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb) 2026-04-09 00:44:01.626495 | orchestrator | 2026-04-09 00:44:01.626501 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.626508 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.435) 0:00:28.139 ******** 2026-04-09 00:44:01.626515 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965) 2026-04-09 00:44:01.626541 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965) 2026-04-09 00:44:01.626546 | orchestrator | 2026-04-09 00:44:01.626550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.626554 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.400) 0:00:28.540 ******** 2026-04-09 00:44:01.626558 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:44:01.626562 | orchestrator | 2026-04-09 00:44:01.626566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 
00:44:01.626570 | orchestrator | Thursday 09 April 2026 00:43:54 +0000 (0:00:00.350) 0:00:28.891 ******** 2026-04-09 00:44:01.626574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:44:01.626579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:44:01.626583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:44:01.626587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:44:01.626591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:44:01.626595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:44:01.626599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:44:01.626603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:44:01.626607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-09 00:44:01.626611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:44:01.626615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-09 00:44:01.626619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:44:01.626622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:44:01.626626 | orchestrator | 2026-04-09 00:44:01.626630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626634 | 
orchestrator | Thursday 09 April 2026 00:43:54 +0000 (0:00:00.530) 0:00:29.421 ******** 2026-04-09 00:44:01.626638 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626642 | orchestrator | 2026-04-09 00:44:01.626646 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626650 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.172) 0:00:29.594 ******** 2026-04-09 00:44:01.626654 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626658 | orchestrator | 2026-04-09 00:44:01.626662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626666 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.182) 0:00:29.777 ******** 2026-04-09 00:44:01.626670 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626674 | orchestrator | 2026-04-09 00:44:01.626695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626699 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.185) 0:00:29.962 ******** 2026-04-09 00:44:01.626703 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626707 | orchestrator | 2026-04-09 00:44:01.626711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626715 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.169) 0:00:30.132 ******** 2026-04-09 00:44:01.626719 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626723 | orchestrator | 2026-04-09 00:44:01.626727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626735 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.187) 0:00:30.319 ******** 2026-04-09 00:44:01.626739 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626742 | orchestrator | 2026-04-09 
00:44:01.626746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626751 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.182) 0:00:30.502 ******** 2026-04-09 00:44:01.626755 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626759 | orchestrator | 2026-04-09 00:44:01.626763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626767 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:00.188) 0:00:30.691 ******** 2026-04-09 00:44:01.626771 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626775 | orchestrator | 2026-04-09 00:44:01.626778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626786 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:00.184) 0:00:30.875 ******** 2026-04-09 00:44:01.626790 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-09 00:44:01.626794 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-09 00:44:01.626799 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-09 00:44:01.626803 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-09 00:44:01.626807 | orchestrator | 2026-04-09 00:44:01.626811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626815 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:00.707) 0:00:31.582 ******** 2026-04-09 00:44:01.626819 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626823 | orchestrator | 2026-04-09 00:44:01.626827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626830 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.180) 0:00:31.763 ******** 2026-04-09 00:44:01.626834 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:44:01.626838 | orchestrator | 2026-04-09 00:44:01.626842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626846 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.183) 0:00:31.946 ******** 2026-04-09 00:44:01.626850 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626854 | orchestrator | 2026-04-09 00:44:01.626858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:01.626862 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.528) 0:00:32.475 ******** 2026-04-09 00:44:01.626866 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626870 | orchestrator | 2026-04-09 00:44:01.626873 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-09 00:44:01.626877 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.180) 0:00:32.655 ******** 2026-04-09 00:44:01.626881 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.626885 | orchestrator | 2026-04-09 00:44:01.626889 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-09 00:44:01.626893 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.137) 0:00:32.793 ******** 2026-04-09 00:44:01.626897 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd534d538-4d4e-5604-9605-85867297f7ab'}}) 2026-04-09 00:44:01.626901 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6327354e-b41f-514e-b570-068bfc1f3295'}}) 2026-04-09 00:44:01.626905 | orchestrator | 2026-04-09 00:44:01.626909 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-09 00:44:01.626913 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.175) 0:00:32.969 ******** 2026-04-09 00:44:01.626918 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:01.626924 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:01.626931 | orchestrator |
2026-04-09 00:44:01.626935 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-09 00:44:01.626939 | orchestrator | Thursday 09 April 2026  00:44:00 +0000 (0:00:01.884)       0:00:34.853 ********
2026-04-09 00:44:01.626943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:01.626948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:01.626952 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:01.626956 | orchestrator |
2026-04-09 00:44:01.626960 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-09 00:44:01.626964 | orchestrator | Thursday 09 April 2026  00:44:00 +0000 (0:00:00.134)       0:00:34.988 ********
2026-04-09 00:44:01.626968 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:01.626975 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:06.441161 | orchestrator |
2026-04-09 00:44:06.441263 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-09 00:44:06.441281 | orchestrator | Thursday 09 April 2026  00:44:01 +0000 (0:00:01.303)       0:00:36.292 ********
2026-04-09 00:44:06.441294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:06.441308 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:06.441319 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.441332 | orchestrator |
2026-04-09 00:44:06.441343 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-09 00:44:06.441354 | orchestrator | Thursday 09 April 2026  00:44:01 +0000 (0:00:00.131)       0:00:36.424 ********
2026-04-09 00:44:06.441365 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.441377 | orchestrator |
2026-04-09 00:44:06.441388 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-09 00:44:06.441525 | orchestrator | Thursday 09 April 2026  00:44:01 +0000 (0:00:00.109)       0:00:36.533 ********
2026-04-09 00:44:06.441549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:06.441568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:06.441580 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.441591 | orchestrator |
2026-04-09 00:44:06.441602 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-09 00:44:06.441613 | orchestrator | Thursday 09 April 2026  00:44:02 +0000 (0:00:00.148)       0:00:36.682 ********
2026-04-09 00:44:06.441625 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.441636 | orchestrator |
2026-04-09 00:44:06.441647 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-09 00:44:06.441658 | orchestrator | Thursday 09 April 2026  00:44:02 +0000 (0:00:00.111)       0:00:36.794 ********
2026-04-09 00:44:06.441669 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:06.441680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:06.441719 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.441732 | orchestrator |
2026-04-09 00:44:06.441747 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-09 00:44:06.441760 | orchestrator | Thursday 09 April 2026  00:44:02 +0000 (0:00:00.120)       0:00:36.914 ********
2026-04-09 00:44:06.441772 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.441786 | orchestrator |
2026-04-09 00:44:06.441816 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-09 00:44:06.441830 | orchestrator | Thursday 09 April 2026  00:44:02 +0000 (0:00:00.269)       0:00:37.184 ********
2026-04-09 00:44:06.441844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:06.441857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:06.441870 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.441883 | orchestrator |
2026-04-09 00:44:06.441894 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-09 00:44:06.441905 | orchestrator | Thursday 09 April 2026  00:44:02 +0000 (0:00:00.106)       0:00:37.314 ********
2026-04-09 00:44:06.441916 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:06.441928 | orchestrator |
2026-04-09 00:44:06.441939 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-09 00:44:06.441950 | orchestrator | Thursday 09 April 2026  00:44:02 +0000 (0:00:00.119)       0:00:37.421 ********
2026-04-09 00:44:06.441961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:06.441972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:06.441983 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.441994 | orchestrator |
2026-04-09 00:44:06.442005 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-09 00:44:06.442073 | orchestrator | Thursday 09 April 2026  00:44:02 +0000 (0:00:00.123)       0:00:37.541 ********
2026-04-09 00:44:06.442086 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:06.442097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:06.442108 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.442119 | orchestrator |
2026-04-09 00:44:06.442130 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-09 00:44:06.442162 | orchestrator | Thursday 09 April 2026  00:44:03 +0000 (0:00:00.120)       0:00:37.664 ********
2026-04-09 00:44:06.442174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:06.442186 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:06.442197 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.442207 | orchestrator |
2026-04-09 00:44:06.442218 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-09 00:44:06.442229 | orchestrator | Thursday 09 April 2026  00:44:03 +0000 (0:00:00.120)       0:00:37.784 ********
2026-04-09 00:44:06.442240 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.442251 | orchestrator |
2026-04-09 00:44:06.442262 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 00:44:06.442273 | orchestrator | Thursday 09 April 2026  00:44:03 +0000 (0:00:00.101)       0:00:37.885 ********
2026-04-09 00:44:06.442293 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.442304 | orchestrator |
2026-04-09 00:44:06.442315 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-09 00:44:06.442331 | orchestrator | Thursday 09 April 2026  00:44:03 +0000 (0:00:00.107)       0:00:37.993 ********
2026-04-09 00:44:06.442343 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.442354 | orchestrator |
2026-04-09 00:44:06.442365 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-09 00:44:06.442376 | orchestrator | Thursday 09 April 2026  00:44:03 +0000 (0:00:00.117)       0:00:38.110 ********
2026-04-09 00:44:06.442387 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:44:06.442419 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-09 00:44:06.442430 | orchestrator | }
2026-04-09 00:44:06.442442 | orchestrator |
2026-04-09 00:44:06.442453 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-09 00:44:06.442464 | orchestrator | Thursday 09 April 2026  00:44:03 +0000 (0:00:00.118)       0:00:38.228 ********
2026-04-09 00:44:06.442475 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:44:06.442486 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-09 00:44:06.442497 | orchestrator | }
2026-04-09 00:44:06.442509 | orchestrator |
2026-04-09 00:44:06.442520 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-09 00:44:06.442531 | orchestrator | Thursday 09 April 2026  00:44:03 +0000 (0:00:00.107)       0:00:38.336 ********
2026-04-09 00:44:06.442542 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:44:06.442554 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-09 00:44:06.442565 | orchestrator | }
2026-04-09 00:44:06.442576 | orchestrator |
2026-04-09 00:44:06.442587 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-09 00:44:06.442598 | orchestrator | Thursday 09 April 2026  00:44:03 +0000 (0:00:00.108)       0:00:38.444 ********
2026-04-09 00:44:06.442609 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:06.442620 | orchestrator |
2026-04-09 00:44:06.442631 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-09 00:44:06.442642 | orchestrator | Thursday 09 April 2026  00:44:04 +0000 (0:00:00.675)       0:00:39.119 ********
2026-04-09 00:44:06.442653 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:06.442665 | orchestrator |
2026-04-09 00:44:06.442676 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-09 00:44:06.442687 | orchestrator | Thursday 09 April 2026  00:44:05 +0000 (0:00:00.503)       0:00:39.623 ********
2026-04-09 00:44:06.442698 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:06.442709 | orchestrator |
2026-04-09 00:44:06.442720 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-09 00:44:06.442730 | orchestrator | Thursday 09 April 2026  00:44:05 +0000 (0:00:00.537)       0:00:40.160 ********
2026-04-09 00:44:06.442741 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:06.442753 | orchestrator |
2026-04-09 00:44:06.442764 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-09 00:44:06.442775 | orchestrator | Thursday 09 April 2026  00:44:05 +0000 (0:00:00.113)       0:00:40.274 ********
2026-04-09 00:44:06.442786 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.442797 | orchestrator |
2026-04-09 00:44:06.442808 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-09 00:44:06.442819 | orchestrator | Thursday 09 April 2026  00:44:05 +0000 (0:00:00.088)       0:00:40.363 ********
2026-04-09 00:44:06.442830 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.442841 | orchestrator |
2026-04-09 00:44:06.442852 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-09 00:44:06.442863 | orchestrator | Thursday 09 April 2026  00:44:05 +0000 (0:00:00.081)       0:00:40.445 ********
2026-04-09 00:44:06.442874 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:44:06.442886 | orchestrator |     "vgs_report": {
2026-04-09 00:44:06.442897 | orchestrator |         "vg": []
2026-04-09 00:44:06.442908 | orchestrator |     }
2026-04-09 00:44:06.442919 | orchestrator | }
2026-04-09 00:44:06.442938 | orchestrator |
2026-04-09 00:44:06.442949 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-09 00:44:06.442960 | orchestrator | Thursday 09 April 2026  00:44:05 +0000 (0:00:00.125)       0:00:40.570 ********
2026-04-09 00:44:06.442971 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.442982 | orchestrator |
2026-04-09 00:44:06.442993 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-09 00:44:06.443004 | orchestrator | Thursday 09 April 2026  00:44:06 +0000 (0:00:00.109)       0:00:40.679 ********
2026-04-09 00:44:06.443015 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.443026 | orchestrator |
2026-04-09 00:44:06.443037 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-09 00:44:06.443048 | orchestrator | Thursday 09 April 2026  00:44:06 +0000 (0:00:00.108)       0:00:40.788 ********
2026-04-09 00:44:06.443059 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.443070 | orchestrator |
2026-04-09 00:44:06.443081 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-09 00:44:06.443092 | orchestrator | Thursday 09 April 2026  00:44:06 +0000 (0:00:00.125)       0:00:40.914 ********
2026-04-09 00:44:06.443104 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:06.443115 | orchestrator |
2026-04-09 00:44:06.443139 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-09 00:44:10.677483 | orchestrator | Thursday 09 April 2026  00:44:06 +0000 (0:00:00.109)       0:00:41.023 ********
2026-04-09 00:44:10.677556 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677563 | orchestrator |
2026-04-09 00:44:10.677568 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-09 00:44:10.677573 | orchestrator | Thursday 09 April 2026  00:44:06 +0000 (0:00:00.120)       0:00:41.144 ********
2026-04-09 00:44:10.677577 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677581 | orchestrator |
2026-04-09 00:44:10.677585 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-09 00:44:10.677589 | orchestrator | Thursday 09 April 2026  00:44:06 +0000 (0:00:00.260)       0:00:41.404 ********
2026-04-09 00:44:10.677593 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677597 | orchestrator |
2026-04-09 00:44:10.677601 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-09 00:44:10.677605 | orchestrator | Thursday 09 April 2026  00:44:06 +0000 (0:00:00.120)       0:00:41.525 ********
2026-04-09 00:44:10.677609 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677613 | orchestrator |
2026-04-09 00:44:10.677616 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-09 00:44:10.677620 | orchestrator | Thursday 09 April 2026  00:44:07 +0000 (0:00:00.122)       0:00:41.647 ********
2026-04-09 00:44:10.677635 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677639 | orchestrator |
2026-04-09 00:44:10.677643 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-09 00:44:10.677646 | orchestrator | Thursday 09 April 2026  00:44:07 +0000 (0:00:00.123)       0:00:41.770 ********
2026-04-09 00:44:10.677650 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677654 | orchestrator |
2026-04-09 00:44:10.677658 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-09 00:44:10.677662 | orchestrator | Thursday 09 April 2026  00:44:07 +0000 (0:00:00.117)       0:00:41.888 ********
2026-04-09 00:44:10.677665 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677669 | orchestrator |
2026-04-09 00:44:10.677673 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-09 00:44:10.677677 | orchestrator | Thursday 09 April 2026  00:44:07 +0000 (0:00:00.136)       0:00:42.025 ********
2026-04-09 00:44:10.677681 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677685 | orchestrator |
2026-04-09 00:44:10.677688 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-09 00:44:10.677692 | orchestrator | Thursday 09 April 2026  00:44:07 +0000 (0:00:00.122)       0:00:42.147 ********
2026-04-09 00:44:10.677696 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677715 | orchestrator |
2026-04-09 00:44:10.677719 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-09 00:44:10.677723 | orchestrator | Thursday 09 April 2026  00:44:07 +0000 (0:00:00.125)       0:00:42.273 ********
2026-04-09 00:44:10.677727 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677730 | orchestrator |
2026-04-09 00:44:10.677734 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-09 00:44:10.677738 | orchestrator | Thursday 09 April 2026  00:44:07 +0000 (0:00:00.130)       0:00:42.403 ********
2026-04-09 00:44:10.677743 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.677751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.677757 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677763 | orchestrator |
2026-04-09 00:44:10.677768 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-09 00:44:10.677777 | orchestrator | Thursday 09 April 2026  00:44:07 +0000 (0:00:00.161)       0:00:42.565 ********
2026-04-09 00:44:10.677784 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.677791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.677796 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677802 | orchestrator |
2026-04-09 00:44:10.677808 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-09 00:44:10.677814 | orchestrator | Thursday 09 April 2026  00:44:08 +0000 (0:00:00.159)       0:00:42.724 ********
2026-04-09 00:44:10.677819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.677826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.677832 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677837 | orchestrator |
2026-04-09 00:44:10.677843 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-09 00:44:10.677849 | orchestrator | Thursday 09 April 2026  00:44:08 +0000 (0:00:00.148)       0:00:42.872 ********
2026-04-09 00:44:10.677855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.677862 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.677869 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677874 | orchestrator |
2026-04-09 00:44:10.677889 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-09 00:44:10.677893 | orchestrator | Thursday 09 April 2026  00:44:08 +0000 (0:00:00.287)       0:00:43.159 ********
2026-04-09 00:44:10.677897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.677901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.677905 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677909 | orchestrator |
2026-04-09 00:44:10.677912 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-09 00:44:10.677916 | orchestrator | Thursday 09 April 2026  00:44:08 +0000 (0:00:00.140)       0:00:43.299 ********
2026-04-09 00:44:10.677926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.677930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.677934 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677938 | orchestrator |
2026-04-09 00:44:10.677942 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-09 00:44:10.677946 | orchestrator | Thursday 09 April 2026  00:44:08 +0000 (0:00:00.136)       0:00:43.440 ********
2026-04-09 00:44:10.677950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.677954 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.677957 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677961 | orchestrator |
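The validation steps logged above cross-check the `lvm_volumes` entries against what LVM actually reports on the node. A minimal sketch of that style of check (not the playbook's actual code; the names and values below are illustrative placeholders, not the testbed's real UUIDs):

```python
# Sketch of a "fail if block LV defined in lvm_volumes is missing" check:
# every data/data_vg pair configured in lvm_volumes must match an LV/VG
# pair reported by LVM (compare the lvm_report JSON printed in this log).
lvm_volumes = [
    {"data": "osd-block-aaaa", "data_vg": "ceph-aaaa"},
    {"data": "osd-block-bbbb", "data_vg": "ceph-bbbb"},
]
lvm_report = {
    "lv": [
        {"lv_name": "osd-block-aaaa", "vg_name": "ceph-aaaa"},
        {"lv_name": "osd-block-bbbb", "vg_name": "ceph-bbbb"},
    ]
}
existing = {(lv["vg_name"], lv["lv_name"]) for lv in lvm_report["lv"]}
missing = [v for v in lvm_volumes if (v["data_vg"], v["data"]) not in existing]
assert not missing, f"block LVs missing for: {missing}"
```

With matching report data the assertion passes; removing an LV from `lvm_report` would make the check fail, which is the behaviour the "Fail if ... is missing" tasks implement at the Ansible level.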
2026-04-09 00:44:10.677965 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-09 00:44:10.677969 | orchestrator | Thursday 09 April 2026  00:44:08 +0000 (0:00:00.149)       0:00:43.576 ********
2026-04-09 00:44:10.677973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.677976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.677980 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.677984 | orchestrator |
2026-04-09 00:44:10.677988 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-09 00:44:10.677992 | orchestrator | Thursday 09 April 2026  00:44:09 +0000 (0:00:00.149)       0:00:43.726 ********
2026-04-09 00:44:10.677995 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:10.678000 | orchestrator |
2026-04-09 00:44:10.678003 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-09 00:44:10.678007 | orchestrator | Thursday 09 April 2026  00:44:09 +0000 (0:00:00.516)       0:00:44.242 ********
2026-04-09 00:44:10.678047 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:10.678052 | orchestrator |
2026-04-09 00:44:10.678057 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-09 00:44:10.678061 | orchestrator | Thursday 09 April 2026  00:44:10 +0000 (0:00:00.528)       0:00:44.771 ********
2026-04-09 00:44:10.678065 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:10.678070 | orchestrator |
2026-04-09 00:44:10.678075 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-09 00:44:10.678079 | orchestrator | Thursday 09 April 2026  00:44:10 +0000 (0:00:00.126)       0:00:44.897 ********
2026-04-09 00:44:10.678083 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'vg_name': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.678089 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'vg_name': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.678094 | orchestrator |
2026-04-09 00:44:10.678099 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-09 00:44:10.678103 | orchestrator | Thursday 09 April 2026  00:44:10 +0000 (0:00:00.152)       0:00:45.049 ********
2026-04-09 00:44:10.678107 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.678146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:10.678151 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:10.678161 | orchestrator |
2026-04-09 00:44:10.678165 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-09 00:44:10.678169 | orchestrator | Thursday 09 April 2026  00:44:10 +0000 (0:00:00.143)       0:00:45.193 ********
2026-04-09 00:44:10.678174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:10.678182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:15.941155 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:15.941265 | orchestrator |
2026-04-09 00:44:15.941282 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-09 00:44:15.941296 | orchestrator | Thursday 09 April 2026  00:44:10 +0000 (0:00:00.136)       0:00:45.330 ********
2026-04-09 00:44:15.941306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:44:15.941319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:44:15.941329 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:15.941339 | orchestrator |
2026-04-09 00:44:15.941350 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-09 00:44:15.941359 | orchestrator | Thursday 09 April 2026  00:44:10 +0000 (0:00:00.121)       0:00:45.451 ********
2026-04-09 00:44:15.941369 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:44:15.941380 | orchestrator |     "lvm_report": {
2026-04-09 00:44:15.941387 | orchestrator |         "lv": [
2026-04-09 00:44:15.941394 | orchestrator |             {
2026-04-09 00:44:15.941462 | orchestrator |                 "lv_name": "osd-block-6327354e-b41f-514e-b570-068bfc1f3295",
2026-04-09 00:44:15.941473 | orchestrator |                 "vg_name": "ceph-6327354e-b41f-514e-b570-068bfc1f3295"
2026-04-09 00:44:15.941480 | orchestrator |             },
2026-04-09 00:44:15.941486 | orchestrator |             {
2026-04-09 00:44:15.941492 | orchestrator |                 "lv_name": "osd-block-d534d538-4d4e-5604-9605-85867297f7ab",
2026-04-09 00:44:15.941499 | orchestrator |                 "vg_name": "ceph-d534d538-4d4e-5604-9605-85867297f7ab"
2026-04-09 00:44:15.941505 | orchestrator |             }
2026-04-09 00:44:15.941512 | orchestrator |         ],
2026-04-09 00:44:15.941518 | orchestrator |         "pv": [
2026-04-09 00:44:15.941524 | orchestrator |             {
2026-04-09 00:44:15.941531 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-09 00:44:15.941537 | orchestrator |                 "vg_name": "ceph-d534d538-4d4e-5604-9605-85867297f7ab"
2026-04-09 00:44:15.941544 | orchestrator |             },
2026-04-09 00:44:15.941550 | orchestrator |             {
2026-04-09 00:44:15.941556 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-09 00:44:15.941562 | orchestrator |                 "vg_name": "ceph-6327354e-b41f-514e-b570-068bfc1f3295"
2026-04-09 00:44:15.941569 | orchestrator |             }
2026-04-09 00:44:15.941576 | orchestrator |         ]
2026-04-09 00:44:15.941582 | orchestrator |     }
2026-04-09 00:44:15.941589 | orchestrator | }
2026-04-09 00:44:15.941595 | orchestrator |
2026-04-09 00:44:15.941602 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-09 00:44:15.941608 | orchestrator |
2026-04-09 00:44:15.941614 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 00:44:15.941620 | orchestrator | Thursday 09 April 2026  00:44:11 +0000 (0:00:00.385)       0:00:45.836 ********
2026-04-09 00:44:15.941627 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-09 00:44:15.941633 | orchestrator |
2026-04-09 00:44:15.941640 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 00:44:15.941646 | orchestrator | Thursday 09 April 2026  00:44:11 +0000 (0:00:00.217)       0:00:46.054 ********
2026-04-09 00:44:15.941669 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:15.941675 | orchestrator |
2026-04-09 00:44:15.941682 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.941689 | orchestrator | Thursday 09 April 2026  00:44:11 +0000 (0:00:00.208)       0:00:46.262 ********
2026-04-09 00:44:15.941698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-09 00:44:15.941710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-09 00:44:15.941721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-09 00:44:15.941737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-09 00:44:15.941748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-09 00:44:15.941759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-09 00:44:15.941770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-09 00:44:15.941780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-09 00:44:15.941791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-09 00:44:15.941803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-09 00:44:15.941814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-09 00:44:15.941826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-09 00:44:15.941838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-09 00:44:15.941849 | orchestrator |
2026-04-09 00:44:15.941859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.941871 | orchestrator | Thursday 09 April 2026  00:44:12 +0000 (0:00:00.373)       0:00:46.635 ********
2026-04-09 00:44:15.941883 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:15.941895 | orchestrator |
2026-04-09 00:44:15.941907 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.941917 | orchestrator | Thursday 09 April 2026  00:44:12 +0000 (0:00:00.205)       0:00:46.841 ********
2026-04-09 00:44:15.941928 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:15.941938 | orchestrator |
2026-04-09 00:44:15.941957 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.941988 | orchestrator | Thursday 09 April 2026  00:44:12 +0000 (0:00:00.180)       0:00:47.021 ********
2026-04-09 00:44:15.941997 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:15.942007 | orchestrator |
2026-04-09 00:44:15.942079 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.942090 | orchestrator | Thursday 09 April 2026  00:44:12 +0000 (0:00:00.199)       0:00:47.220 ********
2026-04-09 00:44:15.942099 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:15.942108 | orchestrator |
2026-04-09 00:44:15.942118 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.942128 | orchestrator | Thursday 09 April 2026  00:44:12 +0000 (0:00:00.177)       0:00:47.398 ********
2026-04-09 00:44:15.942139 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:15.942148 | orchestrator |
2026-04-09 00:44:15.942155 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.942161 | orchestrator | Thursday 09 April 2026  00:44:13 +0000 (0:00:00.201)       0:00:47.599 ********
2026-04-09 00:44:15.942168 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:15.942174 | orchestrator |
2026-04-09 00:44:15.942180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.942193 | orchestrator | Thursday 09 April 2026  00:44:13 +0000 (0:00:00.465)       0:00:48.065 ********
2026-04-09 00:44:15.942199 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:15.942214 | orchestrator |
2026-04-09 00:44:15.942220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.942227 | orchestrator | Thursday 09 April 2026  00:44:13 +0000 (0:00:00.168)       0:00:48.233 ********
2026-04-09 00:44:15.942233 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:15.942239 | orchestrator |
2026-04-09 00:44:15.942245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.942251 | orchestrator | Thursday 09 April 2026  00:44:13 +0000 (0:00:00.169)       0:00:48.403 ********
2026-04-09 00:44:15.942257 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168)
2026-04-09 00:44:15.942264 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168)
2026-04-09 00:44:15.942270 | orchestrator |
2026-04-09 00:44:15.942276 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.942283 | orchestrator | Thursday 09 April 2026  00:44:14 +0000 (0:00:00.389)       0:00:48.792 ********
2026-04-09 00:44:15.942289 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2)
2026-04-09 00:44:15.942295 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2)
2026-04-09 00:44:15.942301 | orchestrator |
2026-04-09 00:44:15.942307 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:15.942313 | orchestrator | Thursday 09 April 2026  00:44:14 +0000 (0:00:00.375)       0:00:49.168 ********
2026-04-09 00:44:15.942319 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f)
2026-04-09 00:44:15.942325 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f)
2026-04-09 00:44:15.942331 | orchestrator |
2026-04-09 00:44:15.942337 | orchestrator | TASK [Add
known links to the list of available block devices] ****************** 2026-04-09 00:44:15.942343 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.372) 0:00:49.540 ******** 2026-04-09 00:44:15.942350 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669) 2026-04-09 00:44:15.942356 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669) 2026-04-09 00:44:15.942362 | orchestrator | 2026-04-09 00:44:15.942368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:15.942374 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.388) 0:00:49.929 ******** 2026-04-09 00:44:15.942380 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:44:15.942387 | orchestrator | 2026-04-09 00:44:15.942393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:15.942399 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.297) 0:00:50.226 ******** 2026-04-09 00:44:15.942405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-09 00:44:15.942509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-09 00:44:15.942517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-09 00:44:15.942524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-09 00:44:15.942530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-09 00:44:15.942536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-09 00:44:15.942542 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-09 00:44:15.942548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-09 00:44:15.942554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-09 00:44:15.942566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-09 00:44:15.942572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-09 00:44:15.942587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-09 00:44:24.031274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-09 00:44:24.031831 | orchestrator | 2026-04-09 00:44:24.031852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.031859 | orchestrator | Thursday 09 April 2026 00:44:16 +0000 (0:00:00.373) 0:00:50.600 ******** 2026-04-09 00:44:24.031866 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.031873 | orchestrator | 2026-04-09 00:44:24.031880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.031886 | orchestrator | Thursday 09 April 2026 00:44:16 +0000 (0:00:00.180) 0:00:50.780 ******** 2026-04-09 00:44:24.031892 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.031898 | orchestrator | 2026-04-09 00:44:24.031905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.031911 | orchestrator | Thursday 09 April 2026 00:44:16 +0000 (0:00:00.178) 0:00:50.959 ******** 2026-04-09 00:44:24.031917 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.031923 | orchestrator | 2026-04-09 00:44:24.031929 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.031944 | orchestrator | Thursday 09 April 2026 00:44:16 +0000 (0:00:00.482) 0:00:51.441 ******** 2026-04-09 00:44:24.031951 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.031957 | orchestrator | 2026-04-09 00:44:24.031963 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.031970 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.198) 0:00:51.640 ******** 2026-04-09 00:44:24.031977 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.031984 | orchestrator | 2026-04-09 00:44:24.031990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.031996 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.178) 0:00:51.819 ******** 2026-04-09 00:44:24.032003 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032010 | orchestrator | 2026-04-09 00:44:24.032017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.032023 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.182) 0:00:52.001 ******** 2026-04-09 00:44:24.032030 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032037 | orchestrator | 2026-04-09 00:44:24.032045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.032052 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.188) 0:00:52.190 ******** 2026-04-09 00:44:24.032059 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032066 | orchestrator | 2026-04-09 00:44:24.032073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.032080 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.179) 0:00:52.370 ******** 
2026-04-09 00:44:24.032087 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-09 00:44:24.032095 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-09 00:44:24.032102 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-09 00:44:24.032109 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-09 00:44:24.032116 | orchestrator | 2026-04-09 00:44:24.032123 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.032130 | orchestrator | Thursday 09 April 2026 00:44:18 +0000 (0:00:00.565) 0:00:52.935 ******** 2026-04-09 00:44:24.032137 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032144 | orchestrator | 2026-04-09 00:44:24.032151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.032171 | orchestrator | Thursday 09 April 2026 00:44:18 +0000 (0:00:00.179) 0:00:53.115 ******** 2026-04-09 00:44:24.032178 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032185 | orchestrator | 2026-04-09 00:44:24.032193 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.032200 | orchestrator | Thursday 09 April 2026 00:44:18 +0000 (0:00:00.174) 0:00:53.290 ******** 2026-04-09 00:44:24.032207 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032213 | orchestrator | 2026-04-09 00:44:24.032221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:44:24.032228 | orchestrator | Thursday 09 April 2026 00:44:18 +0000 (0:00:00.176) 0:00:53.467 ******** 2026-04-09 00:44:24.032235 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032242 | orchestrator | 2026-04-09 00:44:24.032249 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-09 00:44:24.032256 | orchestrator | Thursday 09 April 2026 00:44:19 
+0000 (0:00:00.188) 0:00:53.655 ******** 2026-04-09 00:44:24.032263 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032270 | orchestrator | 2026-04-09 00:44:24.032277 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-09 00:44:24.032284 | orchestrator | Thursday 09 April 2026 00:44:19 +0000 (0:00:00.105) 0:00:53.761 ******** 2026-04-09 00:44:24.032291 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a254e30f-06f2-55f8-8a7e-64e382968b4c'}}) 2026-04-09 00:44:24.032298 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a6a3488f-30e9-5ba3-9724-16c1df88c443'}}) 2026-04-09 00:44:24.032305 | orchestrator | 2026-04-09 00:44:24.032312 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-09 00:44:24.032319 | orchestrator | Thursday 09 April 2026 00:44:19 +0000 (0:00:00.310) 0:00:54.071 ******** 2026-04-09 00:44:24.032327 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'}) 2026-04-09 00:44:24.032335 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'}) 2026-04-09 00:44:24.032342 | orchestrator | 2026-04-09 00:44:24.032349 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-09 00:44:24.032368 | orchestrator | Thursday 09 April 2026 00:44:21 +0000 (0:00:02.001) 0:00:56.073 ******** 2026-04-09 00:44:24.032376 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})  2026-04-09 00:44:24.032384 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})  2026-04-09 00:44:24.032391 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032398 | orchestrator | 2026-04-09 00:44:24.032405 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-09 00:44:24.032412 | orchestrator | Thursday 09 April 2026 00:44:21 +0000 (0:00:00.136) 0:00:56.210 ******** 2026-04-09 00:44:24.032443 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'}) 2026-04-09 00:44:24.032451 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'}) 2026-04-09 00:44:24.032458 | orchestrator | 2026-04-09 00:44:24.032465 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-09 00:44:24.032472 | orchestrator | Thursday 09 April 2026 00:44:22 +0000 (0:00:01.372) 0:00:57.582 ******** 2026-04-09 00:44:24.032479 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})  2026-04-09 00:44:24.032491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})  2026-04-09 00:44:24.032498 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032505 | orchestrator | 2026-04-09 00:44:24.032512 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-09 00:44:24.032519 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:00.129) 0:00:57.712 ******** 2026-04-09 00:44:24.032526 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032533 | 
orchestrator | 2026-04-09 00:44:24.032540 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-09 00:44:24.032547 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:00.118) 0:00:57.830 ******** 2026-04-09 00:44:24.032554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})  2026-04-09 00:44:24.032561 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})  2026-04-09 00:44:24.032568 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032575 | orchestrator | 2026-04-09 00:44:24.032582 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-09 00:44:24.032589 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:00.134) 0:00:57.965 ******** 2026-04-09 00:44:24.032596 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032603 | orchestrator | 2026-04-09 00:44:24.032610 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-09 00:44:24.032622 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:00.126) 0:00:58.091 ******** 2026-04-09 00:44:24.032630 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})  2026-04-09 00:44:24.032637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})  2026-04-09 00:44:24.032645 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032652 | orchestrator | 2026-04-09 00:44:24.032659 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-09 00:44:24.032666 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:00.124) 0:00:58.216 ******** 2026-04-09 00:44:24.032673 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032680 | orchestrator | 2026-04-09 00:44:24.032687 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-09 00:44:24.032694 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:00.110) 0:00:58.326 ******** 2026-04-09 00:44:24.032702 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})  2026-04-09 00:44:24.032709 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})  2026-04-09 00:44:24.032716 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:24.032723 | orchestrator | 2026-04-09 00:44:24.032730 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-09 00:44:24.032737 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:00.118) 0:00:58.445 ******** 2026-04-09 00:44:24.032744 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:44:24.032751 | orchestrator | 2026-04-09 00:44:24.032759 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-09 00:44:24.032766 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:00.121) 0:00:58.566 ******** 2026-04-09 00:44:24.032777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})  2026-04-09 00:44:29.463641 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})  2026-04-09 00:44:29.463735 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.463748 | orchestrator | 2026-04-09 00:44:29.463757 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-09 00:44:29.463767 | orchestrator | Thursday 09 April 2026 00:44:24 +0000 (0:00:00.298) 0:00:58.864 ******** 2026-04-09 00:44:29.463775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})  2026-04-09 00:44:29.463784 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})  2026-04-09 00:44:29.463791 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.463799 | orchestrator | 2026-04-09 00:44:29.463821 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-09 00:44:29.463829 | orchestrator | Thursday 09 April 2026 00:44:24 +0000 (0:00:00.136) 0:00:59.001 ******** 2026-04-09 00:44:29.463837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})  2026-04-09 00:44:29.463845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})  2026-04-09 00:44:29.463852 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.463860 | orchestrator | 2026-04-09 00:44:29.463868 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-09 00:44:29.463875 | orchestrator | Thursday 09 April 2026 00:44:24 +0000 (0:00:00.126) 0:00:59.127 ******** 2026-04-09 00:44:29.463882 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.463889 | orchestrator | 2026-04-09 00:44:29.463897 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-09 00:44:29.463905 | orchestrator | Thursday 09 April 2026 00:44:24 +0000 (0:00:00.115) 0:00:59.243 ******** 2026-04-09 00:44:29.463912 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.463919 | orchestrator | 2026-04-09 00:44:29.463926 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-09 00:44:29.463932 | orchestrator | Thursday 09 April 2026 00:44:24 +0000 (0:00:00.105) 0:00:59.348 ******** 2026-04-09 00:44:29.463940 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.463947 | orchestrator | 2026-04-09 00:44:29.463954 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-09 00:44:29.463960 | orchestrator | Thursday 09 April 2026 00:44:24 +0000 (0:00:00.116) 0:00:59.465 ******** 2026-04-09 00:44:29.463967 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 00:44:29.463974 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-09 00:44:29.463982 | orchestrator | } 2026-04-09 00:44:29.463989 | orchestrator | 2026-04-09 00:44:29.463996 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-09 00:44:29.464003 | orchestrator | Thursday 09 April 2026 00:44:24 +0000 (0:00:00.121) 0:00:59.587 ******** 2026-04-09 00:44:29.464010 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 00:44:29.464017 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-09 00:44:29.464024 | orchestrator | } 2026-04-09 00:44:29.464032 | orchestrator | 2026-04-09 00:44:29.464040 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-09 00:44:29.464047 | orchestrator | Thursday 09 April 2026 00:44:25 +0000 (0:00:00.119) 0:00:59.706 ******** 2026-04-09 00:44:29.464054 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 00:44:29.464062 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-09 00:44:29.464070 | orchestrator | } 2026-04-09 00:44:29.464078 | orchestrator | 2026-04-09 00:44:29.464085 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-09 00:44:29.464092 | orchestrator | Thursday 09 April 2026 00:44:25 +0000 (0:00:00.119) 0:00:59.826 ******** 2026-04-09 00:44:29.464122 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:44:29.464131 | orchestrator | 2026-04-09 00:44:29.464137 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-09 00:44:29.464145 | orchestrator | Thursday 09 April 2026 00:44:25 +0000 (0:00:00.459) 0:01:00.286 ******** 2026-04-09 00:44:29.464152 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:44:29.464160 | orchestrator | 2026-04-09 00:44:29.464168 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-09 00:44:29.464175 | orchestrator | Thursday 09 April 2026 00:44:26 +0000 (0:00:00.445) 0:01:00.731 ******** 2026-04-09 00:44:29.464184 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:44:29.464192 | orchestrator | 2026-04-09 00:44:29.464201 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-09 00:44:29.464209 | orchestrator | Thursday 09 April 2026 00:44:26 +0000 (0:00:00.528) 0:01:01.260 ******** 2026-04-09 00:44:29.464217 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:44:29.464226 | orchestrator | 2026-04-09 00:44:29.464234 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-09 00:44:29.464242 | orchestrator | Thursday 09 April 2026 00:44:26 +0000 (0:00:00.254) 0:01:01.514 ******** 2026-04-09 00:44:29.464251 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464259 | orchestrator | 2026-04-09 00:44:29.464268 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-09 00:44:29.464276 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:00.117) 0:01:01.632 ******** 2026-04-09 00:44:29.464284 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464292 | orchestrator | 2026-04-09 00:44:29.464301 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-09 00:44:29.464309 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:00.093) 0:01:01.726 ******** 2026-04-09 00:44:29.464317 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 00:44:29.464326 | orchestrator |  "vgs_report": { 2026-04-09 00:44:29.464334 | orchestrator |  "vg": [] 2026-04-09 00:44:29.464359 | orchestrator |  } 2026-04-09 00:44:29.464368 | orchestrator | } 2026-04-09 00:44:29.464376 | orchestrator | 2026-04-09 00:44:29.464384 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-09 00:44:29.464393 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:00.145) 0:01:01.871 ******** 2026-04-09 00:44:29.464402 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464410 | orchestrator | 2026-04-09 00:44:29.464418 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-09 00:44:29.464492 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:00.131) 0:01:02.002 ******** 2026-04-09 00:44:29.464500 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464508 | orchestrator | 2026-04-09 00:44:29.464516 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-09 00:44:29.464525 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:00.114) 0:01:02.117 ******** 2026-04-09 00:44:29.464533 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464542 | orchestrator | 2026-04-09 00:44:29.464550 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-09 00:44:29.464564 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:00.123) 0:01:02.240 ******** 2026-04-09 00:44:29.464573 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464581 | orchestrator | 2026-04-09 00:44:29.464588 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-09 00:44:29.464596 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:00.107) 0:01:02.347 ******** 2026-04-09 00:44:29.464604 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464612 | orchestrator | 2026-04-09 00:44:29.464620 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-09 00:44:29.464628 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:00.120) 0:01:02.468 ******** 2026-04-09 00:44:29.464636 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464652 | orchestrator | 2026-04-09 00:44:29.464660 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-09 00:44:29.464668 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:00.123) 0:01:02.592 ******** 2026-04-09 00:44:29.464676 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464685 | orchestrator | 2026-04-09 00:44:29.464693 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-09 00:44:29.464701 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:00.131) 0:01:02.723 ******** 2026-04-09 00:44:29.464708 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464716 | orchestrator | 2026-04-09 00:44:29.464724 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-09 00:44:29.464732 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:00.124) 0:01:02.848 ******** 2026-04-09 00:44:29.464740 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 00:44:29.464748 | orchestrator | 2026-04-09 00:44:29.464756 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-09 00:44:29.464764 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:00.262) 0:01:03.110 ******** 2026-04-09 00:44:29.464772 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464780 | orchestrator | 2026-04-09 00:44:29.464788 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-09 00:44:29.464796 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:00.124) 0:01:03.235 ******** 2026-04-09 00:44:29.464804 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464812 | orchestrator | 2026-04-09 00:44:29.464820 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-09 00:44:29.464828 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:00.124) 0:01:03.360 ******** 2026-04-09 00:44:29.464836 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464844 | orchestrator | 2026-04-09 00:44:29.464852 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-09 00:44:29.464860 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:00.123) 0:01:03.484 ******** 2026-04-09 00:44:29.464868 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464876 | orchestrator | 2026-04-09 00:44:29.464884 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-09 00:44:29.464892 | orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:00.118) 0:01:03.602 ******** 2026-04-09 00:44:29.464900 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:29.464908 | orchestrator | 2026-04-09 00:44:29.464917 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-09 00:44:29.464924 | 
orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:00.124) 0:01:03.726 ********
2026-04-09 00:44:29.464934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:29.464942 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:29.464950 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:29.464958 | orchestrator |
2026-04-09 00:44:29.464964 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-09 00:44:29.464971 | orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:00.130) 0:01:03.856 ********
2026-04-09 00:44:29.464978 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:29.464986 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:29.464994 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:29.465002 | orchestrator |
2026-04-09 00:44:29.465010 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-09 00:44:29.465024 | orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:00.134) 0:01:03.991 ********
2026-04-09 00:44:29.465039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.156531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.156626 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.156642 | orchestrator |
2026-04-09 00:44:32.156654 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-09 00:44:32.156665 | orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:00.126) 0:01:04.117 ********
2026-04-09 00:44:32.156676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.156701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.156712 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.156722 | orchestrator |
2026-04-09 00:44:32.156732 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-09 00:44:32.156741 | orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:00.134) 0:01:04.252 ********
2026-04-09 00:44:32.156751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.156761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.156771 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.156781 | orchestrator |
2026-04-09 00:44:32.156791 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-09 00:44:32.156801 | orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:00.131) 0:01:04.383 ********
2026-04-09 00:44:32.156810 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.156820 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.156830 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.156840 | orchestrator |
2026-04-09 00:44:32.156849 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-09 00:44:32.156859 | orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:00.126) 0:01:04.511 ********
2026-04-09 00:44:32.156869 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.156878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.156888 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.156898 | orchestrator |
2026-04-09 00:44:32.156907 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-09 00:44:32.156917 | orchestrator | Thursday 09 April 2026 00:44:30 +0000 (0:00:00.256) 0:01:04.767 ********
2026-04-09 00:44:32.156926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.156936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.156946 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.156978 | orchestrator |
2026-04-09 00:44:32.156989 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-09 00:44:32.156999 | orchestrator | Thursday 09 April 2026 00:44:30 +0000 (0:00:00.139) 0:01:04.907 ********
2026-04-09 00:44:32.157008 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:32.157019 | orchestrator |
2026-04-09 00:44:32.157029 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-09 00:44:32.157038 | orchestrator | Thursday 09 April 2026 00:44:30 +0000 (0:00:00.518) 0:01:05.425 ********
2026-04-09 00:44:32.157048 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:32.157057 | orchestrator |
2026-04-09 00:44:32.157067 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-09 00:44:32.157076 | orchestrator | Thursday 09 April 2026 00:44:31 +0000 (0:00:00.502) 0:01:05.928 ********
2026-04-09 00:44:32.157086 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:32.157096 | orchestrator |
2026-04-09 00:44:32.157105 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-09 00:44:32.157115 | orchestrator | Thursday 09 April 2026 00:44:31 +0000 (0:00:00.126) 0:01:06.055 ********
2026-04-09 00:44:32.157125 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'vg_name': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.157136 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'vg_name': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.157146 | orchestrator |
2026-04-09 00:44:32.157155 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-09 00:44:32.157165 | orchestrator | Thursday 09 April 2026 00:44:31 +0000 (0:00:00.146) 0:01:06.201 ********
2026-04-09 00:44:32.157191 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.157202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.157211 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.157221 | orchestrator |
2026-04-09 00:44:32.157231 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-09 00:44:32.157240 | orchestrator | Thursday 09 April 2026 00:44:31 +0000 (0:00:00.139) 0:01:06.341 ********
2026-04-09 00:44:32.157250 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.157260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.157270 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.157279 | orchestrator |
2026-04-09 00:44:32.157289 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-09 00:44:32.157299 | orchestrator | Thursday 09 April 2026 00:44:31 +0000 (0:00:00.134) 0:01:06.476 ********
2026-04-09 00:44:32.157308 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:44:32.157318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:44:32.157328 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:32.157337 | orchestrator |
2026-04-09 00:44:32.157347 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-09 00:44:32.157357 | orchestrator | Thursday 09 April 2026 00:44:32 +0000 (0:00:00.137) 0:01:06.614 ********
2026-04-09 00:44:32.157366 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:44:32.157376 | orchestrator |     "lvm_report": {
2026-04-09 00:44:32.157386 | orchestrator |         "lv": [
2026-04-09 00:44:32.157403 | orchestrator |             {
2026-04-09 00:44:32.157413 | orchestrator |                 "lv_name": "osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c",
2026-04-09 00:44:32.157423 | orchestrator |                 "vg_name": "ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c"
2026-04-09 00:44:32.157451 | orchestrator |             },
2026-04-09 00:44:32.157461 | orchestrator |             {
2026-04-09 00:44:32.157471 | orchestrator |                 "lv_name": "osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443",
2026-04-09 00:44:32.157481 | orchestrator |                 "vg_name": "ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443"
2026-04-09 00:44:32.157490 | orchestrator |             }
2026-04-09 00:44:32.157500 | orchestrator |         ],
2026-04-09 00:44:32.157509 | orchestrator |         "pv": [
2026-04-09 00:44:32.157519 | orchestrator |             {
2026-04-09 00:44:32.157528 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-09 00:44:32.157538 | orchestrator |                 "vg_name": "ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c"
2026-04-09 00:44:32.157547 | orchestrator |             },
2026-04-09 00:44:32.157557 | orchestrator |             {
2026-04-09 00:44:32.157566 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-09 00:44:32.157576 | orchestrator |                 "vg_name": "ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443"
2026-04-09 00:44:32.157586 | orchestrator |             }
2026-04-09 00:44:32.157596 | orchestrator |         ]
2026-04-09 00:44:32.157605 | orchestrator |     }
2026-04-09 00:44:32.157615 | orchestrator | }
2026-04-09 00:44:32.157625 | orchestrator |
2026-04-09 00:44:32.157635 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:44:32.157645 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-09 00:44:32.157655 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-09 00:44:32.157665 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-09 00:44:32.157675 | orchestrator |
2026-04-09 00:44:32.157684 | orchestrator |
2026-04-09 00:44:32.157694 | orchestrator |
2026-04-09 00:44:32.157710 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:44:32.157721 | orchestrator | Thursday 09 April 2026 00:44:32 +0000 (0:00:00.116) 0:01:06.730 ********
2026-04-09 00:44:32.157731 | orchestrator | ===============================================================================
2026-04-09 00:44:32.157740 | orchestrator | Create block VGs -------------------------------------------------------- 5.92s
2026-04-09 00:44:32.157750 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s
2026-04-09 00:44:32.157760 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s
2026-04-09 00:44:32.157769 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s
2026-04-09 00:44:32.157779 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s
2026-04-09 00:44:32.157789 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.52s
2026-04-09 00:44:32.157798 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.43s
2026-04-09 00:44:32.157808 | orchestrator | Add known partitions to the list of available block devices ------------- 1.34s
2026-04-09 00:44:32.157824 | orchestrator | Add known partitions to the list of available block devices ------------- 1.33s
2026-04-09 00:44:32.393703 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s
2026-04-09 00:44:32.393805 | orchestrator | Print LVM report data --------------------------------------------------- 0.78s
2026-04-09 00:44:32.393820 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2026-04-09 00:44:32.393832 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s
2026-04-09 00:44:32.393843 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2026-04-09 00:44:32.393886 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.70s
2026-04-09 00:44:32.393897 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-04-09 00:44:32.393923 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-04-09 00:44:32.393935 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.64s
2026-04-09 00:44:32.393946 | orchestrator | Get initial list of available block devices ----------------------------- 0.63s
2026-04-09 00:44:32.393957 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.63s
2026-04-09 00:44:43.682884 | orchestrator | 2026-04-09 00:44:43 | INFO  | Prepare task for execution of facts.
2026-04-09 00:44:43.753724 | orchestrator | 2026-04-09 00:44:43 | INFO  | Task 1f95d23f-5cc2-48f6-af71-aea7db325114 (facts) was prepared for execution.
2026-04-09 00:44:43.753847 | orchestrator | 2026-04-09 00:44:43 | INFO  | It takes a moment until task 1f95d23f-5cc2-48f6-af71-aea7db325114 (facts) has been started and output is visible here.
2026-04-09 00:44:55.586922 | orchestrator |
2026-04-09 00:44:55.587013 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-09 00:44:55.587026 | orchestrator |
2026-04-09 00:44:55.587034 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-09 00:44:55.587042 | orchestrator | Thursday 09 April 2026 00:44:46 +0000 (0:00:00.319) 0:00:00.319 ********
2026-04-09 00:44:55.587050 | orchestrator | ok: [testbed-manager]
2026-04-09 00:44:55.587059 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:44:55.587066 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:44:55.587084 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:44:55.587091 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:44:55.587098 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:55.587105 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:55.587112 | orchestrator |
2026-04-09 00:44:55.587119 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-09 00:44:55.587126 | orchestrator | Thursday 09 April 2026 00:44:48 +0000 (0:00:01.333) 0:00:01.652 ********
2026-04-09 00:44:55.587136 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:44:55.587149 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:44:55.587160 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:44:55.587171 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:44:55.587181 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:44:55.587194 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:55.587207 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:55.587219 | orchestrator |
2026-04-09 00:44:55.587230 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 00:44:55.587242 | orchestrator |
2026-04-09 00:44:55.587250 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:44:55.587257 | orchestrator | Thursday 09 April 2026 00:44:49 +0000 (0:00:01.141) 0:00:02.794 ********
2026-04-09 00:44:55.587263 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:44:55.587270 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:44:55.587277 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:44:55.587285 | orchestrator | ok: [testbed-manager]
2026-04-09 00:44:55.587291 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:44:55.587298 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:44:55.587305 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:55.587311 | orchestrator |
2026-04-09 00:44:55.587318 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-09 00:44:55.587325 | orchestrator |
2026-04-09 00:44:55.587332 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-09 00:44:55.587338 | orchestrator | Thursday 09 April 2026 00:44:54 +0000 (0:00:05.289) 0:00:08.083 ********
2026-04-09 00:44:55.587345 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:44:55.587352 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:44:55.587383 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:44:55.587390 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:44:55.587397 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:44:55.587404 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:55.587411 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:55.587417 | orchestrator |
2026-04-09 00:44:55.587424 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:44:55.587431 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:44:55.587484 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:44:55.587493 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:44:55.587501 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:44:55.587509 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:44:55.587517 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:44:55.587525 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:44:55.587533 | orchestrator |
2026-04-09 00:44:55.587540 | orchestrator |
2026-04-09 00:44:55.587548 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:44:55.587556 | orchestrator | Thursday 09 April 2026 00:44:55 +0000 (0:00:00.523) 0:00:08.606 ********
2026-04-09 00:44:55.587564 | orchestrator | ===============================================================================
2026-04-09 00:44:55.587572 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.29s
2026-04-09 00:44:55.587579 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.33s
2026-04-09 00:44:55.587601 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.14s
2026-04-09 00:44:55.587609 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-04-09 00:45:07.099211 | orchestrator | 2026-04-09 00:45:07 | INFO  | Prepare task for execution of frr.
2026-04-09 00:45:07.167859 | orchestrator | 2026-04-09 00:45:07 | INFO  | Task d4eeabcb-837f-48c9-a69c-7eb563ffcc1a (frr) was prepared for execution.
2026-04-09 00:45:07.167932 | orchestrator | 2026-04-09 00:45:07 | INFO  | It takes a moment until task d4eeabcb-837f-48c9-a69c-7eb563ffcc1a (frr) has been started and output is visible here.
2026-04-09 00:45:29.782408 | orchestrator |
2026-04-09 00:45:29.782568 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-09 00:45:29.782581 | orchestrator |
2026-04-09 00:45:29.782589 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-09 00:45:29.782595 | orchestrator | Thursday 09 April 2026 00:45:10 +0000 (0:00:00.239) 0:00:00.239 ********
2026-04-09 00:45:29.782602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:45:29.782609 | orchestrator |
2026-04-09 00:45:29.782615 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-09 00:45:29.782621 | orchestrator | Thursday 09 April 2026 00:45:10 +0000 (0:00:00.171) 0:00:00.411 ********
2026-04-09 00:45:29.782627 | orchestrator | changed: [testbed-manager]
2026-04-09 00:45:29.782635 | orchestrator |
2026-04-09 00:45:29.782641 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-09 00:45:29.782664 | orchestrator | Thursday 09 April 2026 00:45:11 +0000 (0:00:01.413) 0:00:01.824 ********
2026-04-09 00:45:29.782671 | orchestrator | changed: [testbed-manager]
2026-04-09 00:45:29.782677 | orchestrator |
2026-04-09 00:45:29.782683 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-09 00:45:29.782689 | orchestrator | Thursday 09 April 2026 00:45:20 +0000 (0:00:08.376) 0:00:10.201 ********
2026-04-09 00:45:29.782695 | orchestrator | ok: [testbed-manager]
2026-04-09 00:45:29.782701 | orchestrator |
2026-04-09 00:45:29.782708 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-09 00:45:29.782714 | orchestrator | Thursday 09 April 2026 00:45:20 +0000 (0:00:00.908) 0:00:11.110 ********
2026-04-09 00:45:29.782720 | orchestrator | changed: [testbed-manager]
2026-04-09 00:45:29.782725 | orchestrator |
2026-04-09 00:45:29.782731 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-09 00:45:29.782737 | orchestrator | Thursday 09 April 2026 00:45:21 +0000 (0:00:00.865) 0:00:11.976 ********
2026-04-09 00:45:29.782743 | orchestrator | ok: [testbed-manager]
2026-04-09 00:45:29.782749 | orchestrator |
2026-04-09 00:45:29.782755 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-09 00:45:29.782761 | orchestrator | Thursday 09 April 2026 00:45:22 +0000 (0:00:01.115) 0:00:13.091 ********
2026-04-09 00:45:29.782767 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:45:29.782772 | orchestrator |
2026-04-09 00:45:29.782778 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-09 00:45:29.782784 | orchestrator | Thursday 09 April 2026 00:45:23 +0000 (0:00:00.151) 0:00:13.243 ********
2026-04-09 00:45:29.782790 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:45:29.782796 | orchestrator |
2026-04-09 00:45:29.782802 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-09 00:45:29.782808 | orchestrator | Thursday 09 April 2026 00:45:23 +0000 (0:00:00.223) 0:00:13.466 ********
2026-04-09 00:45:29.782813 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:45:29.782819 | orchestrator |
2026-04-09 00:45:29.782825 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-09 00:45:29.782832 | orchestrator | Thursday 09 April 2026 00:45:23 +0000 (0:00:00.150) 0:00:13.617 ********
2026-04-09 00:45:29.782837 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:45:29.782843 | orchestrator |
2026-04-09 00:45:29.782849 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-09 00:45:29.782855 | orchestrator | Thursday 09 April 2026 00:45:23 +0000 (0:00:00.123) 0:00:13.740 ********
2026-04-09 00:45:29.782861 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:45:29.782867 | orchestrator |
2026-04-09 00:45:29.782873 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-09 00:45:29.782878 | orchestrator | Thursday 09 April 2026 00:45:23 +0000 (0:00:00.154) 0:00:13.894 ********
2026-04-09 00:45:29.782884 | orchestrator | changed: [testbed-manager]
2026-04-09 00:45:29.782890 | orchestrator |
2026-04-09 00:45:29.782897 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-09 00:45:29.782906 | orchestrator | Thursday 09 April 2026 00:45:24 +0000 (0:00:00.929) 0:00:14.824 ********
2026-04-09 00:45:29.782916 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-09 00:45:29.782925 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-09 00:45:29.782935 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-09 00:45:29.782944 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-09 00:45:29.782953 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-09 00:45:29.782962 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-09 00:45:29.782978 | orchestrator |
2026-04-09 00:45:29.782988 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-09 00:45:29.783011 | orchestrator | Thursday 09 April 2026 00:45:26 +0000 (0:00:02.151) 0:00:16.975 ********
2026-04-09 00:45:29.783022 | orchestrator | ok: [testbed-manager]
2026-04-09 00:45:29.783032 | orchestrator |
2026-04-09 00:45:29.783042 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-09 00:45:29.783052 | orchestrator | Thursday 09 April 2026 00:45:28 +0000 (0:00:01.175) 0:00:18.151 ********
2026-04-09 00:45:29.783063 | orchestrator | changed: [testbed-manager]
2026-04-09 00:45:29.783073 | orchestrator |
2026-04-09 00:45:29.783084 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:45:29.783094 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 00:45:29.783105 | orchestrator |
2026-04-09 00:45:29.783112 | orchestrator |
2026-04-09 00:45:29.783134 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:45:29.783141 | orchestrator | Thursday 09 April 2026 00:45:29 +0000 (0:00:01.396) 0:00:19.547 ********
2026-04-09 00:45:29.783148 | orchestrator | ===============================================================================
2026-04-09 00:45:29.783156 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.38s
2026-04-09 00:45:29.783166 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.15s
2026-04-09 00:45:29.783175 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.41s
2026-04-09 00:45:29.783184 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.40s
2026-04-09 00:45:29.783194 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.18s
2026-04-09 00:45:29.783203 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.12s
2026-04-09 00:45:29.783213 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.93s
2026-04-09 00:45:29.783224 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.91s
2026-04-09 00:45:29.783233 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.87s
2026-04-09 00:45:29.783243 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.22s
2026-04-09 00:45:29.783253 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.17s
2026-04-09 00:45:29.783262 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s
2026-04-09 00:45:29.783268 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s
2026-04-09 00:45:29.783274 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s
2026-04-09 00:45:29.783280 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.12s
2026-04-09 00:45:29.954243 | orchestrator |
2026-04-09 00:45:29.959502 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Apr 9 00:45:29 UTC 2026
2026-04-09 00:45:29.959548 | orchestrator |
2026-04-09 00:45:30.990223 | orchestrator | 2026-04-09 00:45:30 | INFO  | Collection nutshell is prepared for execution
2026-04-09 00:45:31.090675 | orchestrator | 2026-04-09 00:45:31 | INFO  | A [0] - dotfiles
2026-04-09 00:45:41.262257 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [0] - homer
2026-04-09 00:45:41.262345 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [0] - netdata
2026-04-09 00:45:41.262354 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [0] - openstackclient
2026-04-09 00:45:41.262362 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [0] - phpmyadmin
2026-04-09 00:45:41.262369 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [0] - common
2026-04-09 00:45:41.266220 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [1] -- loadbalancer
2026-04-09 00:45:41.266279 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [2] --- opensearch
2026-04-09 00:45:41.266305 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [2] --- mariadb-ng
2026-04-09 00:45:41.266310 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [3] ---- horizon
2026-04-09 00:45:41.266449 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [3] ---- keystone
2026-04-09 00:45:41.266666 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- neutron
2026-04-09 00:45:41.266965 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [5] ------ wait-for-nova
2026-04-09 00:45:41.267127 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [6] ------- octavia
2026-04-09 00:45:41.268799 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- barbican
2026-04-09 00:45:41.268869 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- designate
2026-04-09 00:45:41.268924 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- ironic
2026-04-09 00:45:41.268933 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- placement
2026-04-09 00:45:41.268946 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- magnum
2026-04-09 00:45:41.270677 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [1] -- openvswitch
2026-04-09 00:45:41.270713 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [2] --- ovn
2026-04-09 00:45:41.271010 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [1] -- memcached
2026-04-09 00:45:41.271116 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [1] -- redis
2026-04-09 00:45:41.271172 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [1] -- rabbitmq-ng
2026-04-09 00:45:41.271562 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [0] - kubernetes
2026-04-09 00:45:41.273916 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [1] -- kubeconfig
2026-04-09 00:45:41.273949 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [1] -- copy-kubeconfig
2026-04-09 00:45:41.274239 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [0] - ceph
2026-04-09 00:45:41.276552 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [1] -- ceph-pools
2026-04-09 00:45:41.276588 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [2] --- copy-ceph-keys
2026-04-09 00:45:41.276594 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [3] ---- cephclient
2026-04-09 00:45:41.276699 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-04-09 00:45:41.276829 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- wait-for-keystone
2026-04-09 00:45:41.276992 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [5] ------ kolla-ceph-rgw
2026-04-09 00:45:41.277145 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [5] ------ glance
2026-04-09 00:45:41.277308 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [5] ------ cinder
2026-04-09 00:45:41.277399 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [5] ------ nova
2026-04-09 00:45:41.277768 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [4] ----- prometheus
2026-04-09 00:45:41.277884 | orchestrator | 2026-04-09 00:45:41 | INFO  | A [5] ------ grafana
2026-04-09 00:45:41.457973 | orchestrator | 2026-04-09 00:45:41 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-04-09 00:45:41.458138 | orchestrator | 2026-04-09 00:45:41 | INFO  | Tasks are running in the background
2026-04-09 00:45:43.176301 | orchestrator | 2026-04-09 00:45:43 | INFO  | No task IDs specified, wait for all currently running tasks
2026-04-09 00:45:45.377621 | orchestrator | 2026-04-09 00:45:45 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:45:45.377948 | orchestrator | 2026-04-09 00:45:45 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:45:45.378930 | orchestrator | 2026-04-09 00:45:45 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED
2026-04-09 00:45:45.379667 | orchestrator | 2026-04-09 00:45:45 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:45:45.380580 | orchestrator | 2026-04-09 00:45:45 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:45:45.381271 | orchestrator | 2026-04-09 00:45:45 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED
2026-04-09 00:45:45.384434 | orchestrator | 2026-04-09 00:45:45 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state STARTED
2026-04-09 00:45:45.384513 | orchestrator | 2026-04-09 00:45:45 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:45:48.466506 | orchestrator | 2026-04-09 00:45:48 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:45:48.469754 | orchestrator | 2026-04-09 00:45:48 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:45:48.471523 | orchestrator | 2026-04-09 00:45:48 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED
2026-04-09 00:45:48.472100 | orchestrator | 2026-04-09 00:45:48 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:45:48.473253 | orchestrator | 2026-04-09 00:45:48 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:45:48.473908 | orchestrator | 2026-04-09 00:45:48 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED
2026-04-09 00:45:48.476301 | orchestrator | 2026-04-09 00:45:48 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state STARTED
2026-04-09 00:45:48.476350 | orchestrator | 2026-04-09 00:45:48 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:45:51.504777 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:45:51.505106 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:45:51.505799 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED
2026-04-09 00:45:51.506703 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:45:51.507629 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:45:51.509502 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED
2026-04-09 00:45:51.510070 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state STARTED
2026-04-09 00:45:51.510099 | orchestrator | 2026-04-09 00:45:51 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:45:54.664924 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:45:54.665041 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:45:54.665059 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED
2026-04-09 00:45:54.665831 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:45:54.665870 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:45:54.666709 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED
2026-04-09 00:45:54.667237 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state STARTED
2026-04-09 00:45:54.667256 | orchestrator | 2026-04-09 00:45:54 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:45:57.712269 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:45:57.712341 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:45:57.712348 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED
2026-04-09 00:45:57.712370 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:45:57.714513 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:45:57.719017 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED
2026-04-09 00:45:57.719095 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state STARTED
2026-04-09 00:45:57.719108 | orchestrator | 2026-04-09 00:45:57 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:01.128047 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:46:01.128154 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:46:01.128168 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED
2026-04-09 00:46:01.128179 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:46:01.128189 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:46:01.128199 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED
2026-04-09 00:46:01.128209 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state STARTED
2026-04-09 00:46:01.128219 | orchestrator | 2026-04-09
00:46:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:03.862860 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:46:03.862947 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:46:03.862957 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED 2026-04-09 00:46:03.863607 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:46:03.868224 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED 2026-04-09 00:46:03.868296 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED 2026-04-09 00:46:03.870118 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state STARTED 2026-04-09 00:46:03.870162 | orchestrator | 2026-04-09 00:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:06.937354 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:46:06.937437 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:46:06.937459 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED 2026-04-09 00:46:06.937464 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:46:06.937468 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED 2026-04-09 00:46:06.938646 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED 2026-04-09 00:46:06.943767 | orchestrator | 2026-04-09 
00:46:06 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state STARTED 2026-04-09 00:46:06.943843 | orchestrator | 2026-04-09 00:46:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:10.083384 | orchestrator | 2026-04-09 00:46:10 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:46:10.084094 | orchestrator | 2026-04-09 00:46:10 | INFO  | Task f5eb99fe-042f-4fe2-a5c0-a1460c107aa7 is in state STARTED 2026-04-09 00:46:10.084119 | orchestrator | 2026-04-09 00:46:10 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:46:10.084720 | orchestrator | 2026-04-09 00:46:10 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED 2026-04-09 00:46:10.085168 | orchestrator | 2026-04-09 00:46:10 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:46:10.085919 | orchestrator | 2026-04-09 00:46:10 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED 2026-04-09 00:46:10.086750 | orchestrator | 2026-04-09 00:46:10 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED 2026-04-09 00:46:10.087132 | orchestrator | 2026-04-09 00:46:10 | INFO  | Task 3879c2ce-cfe9-426d-bfd2-2feae4e5b521 is in state SUCCESS 2026-04-09 00:46:10.087538 | orchestrator | 2026-04-09 00:46:10.087561 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-09 00:46:10.087569 | orchestrator | 2026-04-09 00:46:10.087574 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
****
2026-04-09 00:46:10.087578 | orchestrator | Thursday 09 April 2026 00:45:52 +0000 (0:00:00.784) 0:00:00.784 ********
2026-04-09 00:46:10.087582 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:46:10.087588 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:46:10.087592 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:46:10.087596 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:46:10.087599 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:46:10.087603 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:46:10.087607 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:10.087611 | orchestrator |
2026-04-09 00:46:10.087615 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-09 00:46:10.087619 | orchestrator | Thursday 09 April 2026 00:45:58 +0000 (0:00:06.172) 0:00:06.956 ********
2026-04-09 00:46:10.087624 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-09 00:46:10.087628 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-09 00:46:10.087632 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-09 00:46:10.087636 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-09 00:46:10.087639 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-09 00:46:10.087643 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-09 00:46:10.087647 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-09 00:46:10.087651 | orchestrator |
2026-04-09 00:46:10.087655 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
***
2026-04-09 00:46:10.087659 | orchestrator | Thursday 09 April 2026 00:46:01 +0000 (0:00:03.037) 0:00:09.994 ********
2026-04-09 00:46:10.087667 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:46:00.739803', 'end': '2026-04-09 00:46:00.746891', 'delta': '0:00:00.007088', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:46:10.087699 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:59.148491', 'end': '2026-04-09 00:45:59.154532', 'delta': '0:00:00.006041', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:46:10.087703 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:59.160530', 'end': '2026-04-09 00:45:59.166174', 'delta': '0:00:00.005644', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:46:10.087723 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:59.373175', 'end': '2026-04-09 00:45:59.379686', 'delta': '0:00:00.006511', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:46:10.087729 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:59.445796', 'end': '2026-04-09 00:45:59.453750', 'delta': '0:00:00.007954', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:46:10.087744 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:59.673650', 'end': '2026-04-09 00:45:59.681325', 'delta': '0:00:00.007675', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:46:10.087757 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:59.753834', 'end': '2026-04-09 00:45:59.761365', 'delta': '0:00:00.007531', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:46:10.087764 | orchestrator |
2026-04-09 00:46:10.087769 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-09 00:46:10.087775 | orchestrator | Thursday 09 April 2026 00:46:03 +0000 (0:00:02.317) 0:00:12.312 ********
2026-04-09 00:46:10.087781 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-09 00:46:10.087787 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-09 00:46:10.087793 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-09 00:46:10.087798 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-09 00:46:10.087804 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-09 00:46:10.087810 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-09 00:46:10.087816 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-09 00:46:10.087822 | orchestrator |
2026-04-09 00:46:10.087828 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
****************** 2026-04-09 00:46:10.087834 | orchestrator | Thursday 09 April 2026 00:46:05 +0000 (0:00:01.478) 0:00:13.790 ******** 2026-04-09 00:46:10.087840 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-09 00:46:10.087846 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-09 00:46:10.087852 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-09 00:46:10.087858 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-09 00:46:10.087864 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-09 00:46:10.087869 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-09 00:46:10.087876 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-09 00:46:10.087882 | orchestrator | 2026-04-09 00:46:10.087888 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:46:10.087899 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:46:10.087905 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:46:10.087909 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:46:10.087917 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:46:10.087921 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:46:10.087924 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:46:10.087928 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:46:10.087932 | orchestrator | 2026-04-09 00:46:10.087936 | orchestrator | 2026-04-09 00:46:10.087940 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-04-09 00:46:10.087944 | orchestrator | Thursday 09 April 2026 00:46:07 +0000 (0:00:02.279) 0:00:16.070 ******** 2026-04-09 00:46:10.087948 | orchestrator | =============================================================================== 2026-04-09 00:46:10.087952 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 6.17s 2026-04-09 00:46:10.087956 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.04s 2026-04-09 00:46:10.087960 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.32s 2026-04-09 00:46:10.087964 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.28s 2026-04-09 00:46:10.087967 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.48s 2026-04-09 00:46:10.087971 | orchestrator | 2026-04-09 00:46:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:13.157660 | orchestrator | 2026-04-09 00:46:13 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:46:13.157748 | orchestrator | 2026-04-09 00:46:13 | INFO  | Task f5eb99fe-042f-4fe2-a5c0-a1460c107aa7 is in state STARTED 2026-04-09 00:46:13.158646 | orchestrator | 2026-04-09 00:46:13 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:46:13.158746 | orchestrator | 2026-04-09 00:46:13 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state STARTED 2026-04-09 00:46:13.159867 | orchestrator | 2026-04-09 00:46:13 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:46:13.159911 | orchestrator | 2026-04-09 00:46:13 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED 2026-04-09 00:46:13.160668 | orchestrator | 2026-04-09 00:46:13 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is 
in state STARTED
2026-04-09 00:46:13.160893 | orchestrator | 2026-04-09 00:46:13 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:31.612577 | orchestrator | 2026-04-09 00:46:31 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:46:31.615650 | orchestrator | 2026-04-09 00:46:31 | INFO  | Task f5eb99fe-042f-4fe2-a5c0-a1460c107aa7 is in state STARTED
2026-04-09 00:46:31.618371 | orchestrator | 2026-04-09 00:46:31 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:46:31.619477 | orchestrator | 2026-04-09 00:46:31 | INFO  | Task c1fc2ea5-d18d-4c95-bd14-f639c4f0e4c1 is in state SUCCESS
2026-04-09 00:46:31.622694 | orchestrator | 2026-04-09 00:46:31 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:46:31.628763 | orchestrator | 2026-04-09 00:46:31 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:46:31.631099 | orchestrator | 2026-04-09 00:46:31 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state STARTED
2026-04-09 00:46:31.631199 | orchestrator | 2026-04-09 00:46:31 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:37.726965 | orchestrator | 2026-04-09 00:46:37 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:46:37.727038 | orchestrator | 2026-04-09 00:46:37 | INFO  | Task f5eb99fe-042f-4fe2-a5c0-a1460c107aa7 is in state STARTED
2026-04-09 00:46:37.727260 | orchestrator | 2026-04-09 00:46:37 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:46:37.734713 | orchestrator | 2026-04-09 00:46:37 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:46:37.736476 | orchestrator | 2026-04-09 00:46:37 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:46:37.742726 | orchestrator | 2026-04-09 00:46:37 | INFO  | Task 569096ec-349e-40f6-80a8-6a280e2f77ad is in state SUCCESS
2026-04-09 00:46:37.776826 | orchestrator | 2026-04-09 00:46:37 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:47:05.241163 | orchestrator | 2026-04-09 00:47:05 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:47:05.243836 | orchestrator | 2026-04-09 00:47:05 | INFO  | Task f5eb99fe-042f-4fe2-a5c0-a1460c107aa7 is in state STARTED
2026-04-09 00:47:05.245410 | orchestrator | 2026-04-09 00:47:05 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:47:05.247968 | orchestrator | 2026-04-09 00:47:05 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED
2026-04-09 00:47:05.250225 | orchestrator | 2026-04-09 00:47:05 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED
2026-04-09 00:47:05.250274 | orchestrator | 2026-04-09 00:47:05 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:47:08.287251 | orchestrator | 2026-04-09 00:47:08 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:47:08.288054 | orchestrator | 2026-04-09 00:47:08 | INFO  | Task f5eb99fe-042f-4fe2-a5c0-a1460c107aa7 is in state STARTED
2026-04-09 00:47:08.289261 | orchestrator | 2026-04-09 00:47:08 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:47:08.290402 | orchestrator | 2026-04-09
00:47:08 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:47:08.294724 | orchestrator | 2026-04-09 00:47:08 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED 2026-04-09 00:47:08.294766 | orchestrator | 2026-04-09 00:47:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:11.358230 | orchestrator | 2026-04-09 00:47:11 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:47:11.358474 | orchestrator | 2026-04-09 00:47:11 | INFO  | Task f5eb99fe-042f-4fe2-a5c0-a1460c107aa7 is in state STARTED 2026-04-09 00:47:11.360948 | orchestrator | 2026-04-09 00:47:11 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:47:11.361887 | orchestrator | 2026-04-09 00:47:11 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:47:11.363045 | orchestrator | 2026-04-09 00:47:11 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED 2026-04-09 00:47:11.363283 | orchestrator | 2026-04-09 00:47:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:14.408221 | orchestrator | 2026-04-09 00:47:14 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:47:14.409733 | orchestrator | 2026-04-09 00:47:14 | INFO  | Task f5eb99fe-042f-4fe2-a5c0-a1460c107aa7 is in state SUCCESS 2026-04-09 00:47:14.409987 | orchestrator | 2026-04-09 00:47:14.410009 | orchestrator | 2026-04-09 00:47:14.410046 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-04-09 00:47:14.410053 | orchestrator | 2026-04-09 00:47:14.410059 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-09 00:47:14.410077 | orchestrator | Thursday 09 April 2026 00:45:51 +0000 (0:00:00.636) 0:00:00.636 ******** 2026-04-09 00:47:14.410084 | orchestrator | ok: [testbed-manager] => { 2026-04-09 
00:47:14.410091 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-04-09 00:47:14.410098 | orchestrator | } 2026-04-09 00:47:14.410104 | orchestrator | 2026-04-09 00:47:14.410110 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-04-09 00:47:14.410116 | orchestrator | Thursday 09 April 2026 00:45:51 +0000 (0:00:00.191) 0:00:00.827 ******** 2026-04-09 00:47:14.410121 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:14.410128 | orchestrator | 2026-04-09 00:47:14.410131 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-04-09 00:47:14.410135 | orchestrator | Thursday 09 April 2026 00:45:53 +0000 (0:00:02.020) 0:00:02.848 ******** 2026-04-09 00:47:14.410138 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-04-09 00:47:14.410142 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-04-09 00:47:14.410146 | orchestrator | 2026-04-09 00:47:14.410149 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-04-09 00:47:14.410152 | orchestrator | Thursday 09 April 2026 00:45:56 +0000 (0:00:02.551) 0:00:05.400 ******** 2026-04-09 00:47:14.410156 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410159 | orchestrator | 2026-04-09 00:47:14.410162 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-04-09 00:47:14.410166 | orchestrator | Thursday 09 April 2026 00:45:58 +0000 (0:00:02.366) 0:00:07.767 ******** 2026-04-09 00:47:14.410169 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410172 | orchestrator | 2026-04-09 00:47:14.410176 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-09 00:47:14.410181 | orchestrator | Thursday 09 April 2026 00:46:01 +0000 
(0:00:03.329) 0:00:11.097 ******** 2026-04-09 00:47:14.410186 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-04-09 00:47:14.410191 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:14.410196 | orchestrator | 2026-04-09 00:47:14.410201 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-09 00:47:14.410227 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:26.003) 0:00:37.100 ******** 2026-04-09 00:47:14.410231 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410234 | orchestrator | 2026-04-09 00:47:14.410237 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:47:14.410241 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:14.410246 | orchestrator | 2026-04-09 00:47:14.410249 | orchestrator | 2026-04-09 00:47:14.410252 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:47:14.410258 | orchestrator | Thursday 09 April 2026 00:46:31 +0000 (0:00:03.088) 0:00:40.188 ******** 2026-04-09 00:47:14.410263 | orchestrator | =============================================================================== 2026-04-09 00:47:14.410279 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.00s 2026-04-09 00:47:14.410285 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.33s 2026-04-09 00:47:14.410290 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.09s 2026-04-09 00:47:14.410295 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.55s 2026-04-09 00:47:14.410300 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.37s 2026-04-09 00:47:14.410305 | orchestrator | 
osism.services.homer : Create traefik external network ------------------ 2.02s 2026-04-09 00:47:14.410318 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.19s 2026-04-09 00:47:14.410329 | orchestrator | 2026-04-09 00:47:14.410332 | orchestrator | 2026-04-09 00:47:14.410335 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-09 00:47:14.410339 | orchestrator | 2026-04-09 00:47:14.410342 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-09 00:47:14.410345 | orchestrator | Thursday 09 April 2026 00:45:51 +0000 (0:00:00.754) 0:00:00.754 ******** 2026-04-09 00:47:14.410348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-09 00:47:14.410352 | orchestrator | 2026-04-09 00:47:14.410356 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-09 00:47:14.410359 | orchestrator | Thursday 09 April 2026 00:45:52 +0000 (0:00:00.591) 0:00:01.348 ******** 2026-04-09 00:47:14.410362 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-09 00:47:14.410365 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-09 00:47:14.410369 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-09 00:47:14.410372 | orchestrator | 2026-04-09 00:47:14.410375 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-09 00:47:14.410378 | orchestrator | Thursday 09 April 2026 00:45:56 +0000 (0:00:03.933) 0:00:05.281 ******** 2026-04-09 00:47:14.410381 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410384 | orchestrator | 2026-04-09 00:47:14.410387 | orchestrator | TASK 
[osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-09 00:47:14.410391 | orchestrator | Thursday 09 April 2026 00:45:57 +0000 (0:00:01.401) 0:00:06.682 ******** 2026-04-09 00:47:14.410406 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-09 00:47:14.410412 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:14.410418 | orchestrator | 2026-04-09 00:47:14.410424 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-09 00:47:14.410429 | orchestrator | Thursday 09 April 2026 00:46:31 +0000 (0:00:33.922) 0:00:40.605 ******** 2026-04-09 00:47:14.410434 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410439 | orchestrator | 2026-04-09 00:47:14.410445 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-09 00:47:14.410448 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:00.735) 0:00:41.341 ******** 2026-04-09 00:47:14.410455 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:14.410458 | orchestrator | 2026-04-09 00:47:14.410461 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-09 00:47:14.410464 | orchestrator | Thursday 09 April 2026 00:46:33 +0000 (0:00:00.948) 0:00:42.289 ******** 2026-04-09 00:47:14.410467 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410470 | orchestrator | 2026-04-09 00:47:14.410473 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-09 00:47:14.410478 | orchestrator | Thursday 09 April 2026 00:46:35 +0000 (0:00:01.936) 0:00:44.226 ******** 2026-04-09 00:47:14.410485 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410525 | orchestrator | 2026-04-09 00:47:14.410531 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 
2026-04-09 00:47:14.410536 | orchestrator | Thursday 09 April 2026 00:46:36 +0000 (0:00:01.121) 0:00:45.347 ******** 2026-04-09 00:47:14.410541 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410546 | orchestrator | 2026-04-09 00:47:14.410551 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-09 00:47:14.410556 | orchestrator | Thursday 09 April 2026 00:46:36 +0000 (0:00:00.810) 0:00:46.158 ******** 2026-04-09 00:47:14.410561 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:14.410566 | orchestrator | 2026-04-09 00:47:14.410571 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:47:14.410576 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:14.410581 | orchestrator | 2026-04-09 00:47:14.410586 | orchestrator | 2026-04-09 00:47:14.410591 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:47:14.410596 | orchestrator | Thursday 09 April 2026 00:46:37 +0000 (0:00:00.320) 0:00:46.479 ******** 2026-04-09 00:47:14.410601 | orchestrator | =============================================================================== 2026-04-09 00:47:14.410607 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.92s 2026-04-09 00:47:14.410612 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.93s 2026-04-09 00:47:14.410618 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.94s 2026-04-09 00:47:14.410623 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.40s 2026-04-09 00:47:14.410629 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.12s 2026-04-09 00:47:14.410634 | orchestrator | osism.services.openstackclient : 
Remove ospurge wrapper script ---------- 0.95s 2026-04-09 00:47:14.410640 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.81s 2026-04-09 00:47:14.410649 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.74s 2026-04-09 00:47:14.410655 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.59s 2026-04-09 00:47:14.410661 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.32s 2026-04-09 00:47:14.410666 | orchestrator | 2026-04-09 00:47:14.410672 | orchestrator | 2026-04-09 00:47:14.410677 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-09 00:47:14.410682 | orchestrator | 2026-04-09 00:47:14.410688 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-09 00:47:14.410694 | orchestrator | Thursday 09 April 2026 00:46:11 +0000 (0:00:00.231) 0:00:00.231 ******** 2026-04-09 00:47:14.410699 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:14.410705 | orchestrator | 2026-04-09 00:47:14.410710 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-09 00:47:14.410716 | orchestrator | Thursday 09 April 2026 00:46:13 +0000 (0:00:02.202) 0:00:02.433 ******** 2026-04-09 00:47:14.410721 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-09 00:47:14.410727 | orchestrator | 2026-04-09 00:47:14.410733 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-09 00:47:14.410741 | orchestrator | Thursday 09 April 2026 00:46:14 +0000 (0:00:00.732) 0:00:03.166 ******** 2026-04-09 00:47:14.410747 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410752 | orchestrator | 2026-04-09 00:47:14.410758 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] 
******************* 2026-04-09 00:47:14.410764 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:01.189) 0:00:04.355 ******** 2026-04-09 00:47:14.410769 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-09 00:47:14.410775 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:14.410780 | orchestrator | 2026-04-09 00:47:14.410786 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-09 00:47:14.410791 | orchestrator | Thursday 09 April 2026 00:47:07 +0000 (0:00:51.656) 0:00:56.012 ******** 2026-04-09 00:47:14.410797 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:14.410802 | orchestrator | 2026-04-09 00:47:14.410808 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:47:14.410813 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:14.410819 | orchestrator | 2026-04-09 00:47:14.410824 | orchestrator | 2026-04-09 00:47:14.410830 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:47:14.410840 | orchestrator | Thursday 09 April 2026 00:47:12 +0000 (0:00:04.976) 0:01:00.989 ******** 2026-04-09 00:47:14.410846 | orchestrator | =============================================================================== 2026-04-09 00:47:14.410852 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 51.66s 2026-04-09 00:47:14.410858 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.98s 2026-04-09 00:47:14.410863 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.20s 2026-04-09 00:47:14.410868 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.19s 2026-04-09 00:47:14.410874 | orchestrator | 
osism.services.phpmyadmin : Create required directories ----------------- 0.73s 2026-04-09 00:47:14.411933 | orchestrator | 2026-04-09 00:47:14 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:47:14.414159 | orchestrator | 2026-04-09 00:47:14 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:47:14.415375 | orchestrator | 2026-04-09 00:47:14 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state STARTED 2026-04-09 00:47:14.415409 | orchestrator | 2026-04-09 00:47:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:17.463346 | orchestrator | 2026-04-09 00:47:17 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:47:17.463419 | orchestrator | 2026-04-09 00:47:17 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:47:17.464063 | orchestrator | 2026-04-09 00:47:17 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:47:17.464783 | orchestrator | 2026-04-09 00:47:17 | INFO  | Task 9389fdfe-1b04-43a5-93ee-10abd94f0ca3 is in state SUCCESS 2026-04-09 00:47:17.464811 | orchestrator | 2026-04-09 00:47:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:17.465050 | orchestrator | 2026-04-09 00:47:17.465060 | orchestrator | 2026-04-09 00:47:17.465066 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:47:17.465071 | orchestrator | 2026-04-09 00:47:17.465082 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:47:17.465088 | orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:00.345) 0:00:00.345 ******** 2026-04-09 00:47:17.465094 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-09 00:47:17.465119 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-09 00:47:17.465125 | orchestrator | 
changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-09 00:47:17.465130 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-09 00:47:17.465135 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-09 00:47:17.465140 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-09 00:47:17.465146 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-09 00:47:17.465151 | orchestrator | 2026-04-09 00:47:17.465167 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-09 00:47:17.465172 | orchestrator | 2026-04-09 00:47:17.465178 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-09 00:47:17.465183 | orchestrator | Thursday 09 April 2026 00:45:51 +0000 (0:00:01.345) 0:00:01.691 ******** 2026-04-09 00:47:17.465196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:47:17.465203 | orchestrator | 2026-04-09 00:47:17.465208 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-09 00:47:17.465214 | orchestrator | Thursday 09 April 2026 00:45:54 +0000 (0:00:03.012) 0:00:04.704 ******** 2026-04-09 00:47:17.465219 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:47:17.465225 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:47:17.465230 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:47:17.465235 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:47:17.465240 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:47:17.465245 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:47:17.465250 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:17.465256 | orchestrator | 2026-04-09 
00:47:17.465261 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-09 00:47:17.465267 | orchestrator | Thursday 09 April 2026 00:45:56 +0000 (0:00:02.011) 0:00:06.715 ******** 2026-04-09 00:47:17.465272 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:47:17.465277 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:17.465282 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:47:17.465287 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:47:17.465292 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:47:17.465297 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:47:17.465302 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:47:17.465307 | orchestrator | 2026-04-09 00:47:17.465312 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-09 00:47:17.465317 | orchestrator | Thursday 09 April 2026 00:45:59 +0000 (0:00:02.855) 0:00:09.570 ******** 2026-04-09 00:47:17.465322 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:47:17.465327 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:47:17.465333 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:47:17.465338 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:47:17.465343 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:47:17.465348 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:47:17.465353 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:17.465358 | orchestrator | 2026-04-09 00:47:17.465364 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-09 00:47:17.465369 | orchestrator | Thursday 09 April 2026 00:46:01 +0000 (0:00:01.985) 0:00:11.555 ******** 2026-04-09 00:47:17.465374 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:17.465379 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:47:17.465384 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:47:17.465389 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 00:47:17.465394 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:47:17.465399 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:47:17.465404 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:47:17.465409 | orchestrator | 2026-04-09 00:47:17.465414 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-09 00:47:17.465423 | orchestrator | Thursday 09 April 2026 00:46:11 +0000 (0:00:10.484) 0:00:22.039 ******** 2026-04-09 00:47:17.465429 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:47:17.465434 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:47:17.465439 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:47:17.465444 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:47:17.465450 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:47:17.465455 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:47:17.465460 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:17.465465 | orchestrator | 2026-04-09 00:47:17.465470 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-09 00:47:17.465475 | orchestrator | Thursday 09 April 2026 00:46:48 +0000 (0:00:36.511) 0:00:58.551 ******** 2026-04-09 00:47:17.465481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:47:17.465487 | orchestrator | 2026-04-09 00:47:17.465538 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-09 00:47:17.465544 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:01.570) 0:01:00.121 ******** 2026-04-09 00:47:17.465549 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-09 00:47:17.465555 | 
orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-09 00:47:17.465560 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-09 00:47:17.465565 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-09 00:47:17.465579 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-09 00:47:17.465585 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-09 00:47:17.465590 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-09 00:47:17.465595 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-09 00:47:17.465601 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-09 00:47:17.465606 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-09 00:47:17.465610 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-09 00:47:17.465615 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-09 00:47:17.465620 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-09 00:47:17.465626 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-09 00:47:17.465631 | orchestrator | 2026-04-09 00:47:17.465636 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-09 00:47:17.465643 | orchestrator | Thursday 09 April 2026 00:46:53 +0000 (0:00:03.723) 0:01:03.845 ******** 2026-04-09 00:47:17.465648 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:17.465653 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:47:17.465658 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:47:17.465664 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:47:17.465669 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:47:17.465674 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:47:17.465679 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:47:17.465684 | orchestrator | 2026-04-09 
00:47:17.465690 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-09 00:47:17.465695 | orchestrator | Thursday 09 April 2026 00:46:55 +0000 (0:00:01.541) 0:01:05.386 ******** 2026-04-09 00:47:17.465701 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:47:17.465706 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:17.465711 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:47:17.465716 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:47:17.465721 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:47:17.465727 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:47:17.465733 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:47:17.465738 | orchestrator | 2026-04-09 00:47:17.465746 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-09 00:47:17.465761 | orchestrator | Thursday 09 April 2026 00:46:57 +0000 (0:00:01.918) 0:01:07.304 ******** 2026-04-09 00:47:17.465771 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:17.465781 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:47:17.465791 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:47:17.465801 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:47:17.465811 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:47:17.465821 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:47:17.465826 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:47:17.465831 | orchestrator | 2026-04-09 00:47:17.465836 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-09 00:47:17.465842 | orchestrator | Thursday 09 April 2026 00:46:59 +0000 (0:00:01.919) 0:01:09.224 ******** 2026-04-09 00:47:17.465847 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:47:17.465852 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:47:17.465857 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:47:17.465862 | orchestrator | ok: 
[testbed-node-2] 2026-04-09 00:47:17.465867 | orchestrator | ok: [testbed-manager] 2026-04-09 00:47:17.465873 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:47:17.465878 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:47:17.465883 | orchestrator | 2026-04-09 00:47:17.465889 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-09 00:47:17.465894 | orchestrator | Thursday 09 April 2026 00:47:01 +0000 (0:00:02.491) 0:01:11.715 ******** 2026-04-09 00:47:17.465929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-09 00:47:17.465937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:47:17.465943 | orchestrator | 2026-04-09 00:47:17.465949 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-09 00:47:17.465954 | orchestrator | Thursday 09 April 2026 00:47:03 +0000 (0:00:01.446) 0:01:13.162 ******** 2026-04-09 00:47:17.465960 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:17.465965 | orchestrator | 2026-04-09 00:47:17.465970 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-09 00:47:17.465976 | orchestrator | Thursday 09 April 2026 00:47:04 +0000 (0:00:01.586) 0:01:14.749 ******** 2026-04-09 00:47:17.465981 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:47:17.465986 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:47:17.465991 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:47:17.465996 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:47:17.466001 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:47:17.466006 | orchestrator | changed: [testbed-node-5] 2026-04-09 
00:47:17.466011 | orchestrator | changed: [testbed-manager] 2026-04-09 00:47:17.466055 | orchestrator | 2026-04-09 00:47:17.466061 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:47:17.466066 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:17.466073 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:17.466079 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:17.466084 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:17.466094 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:17.466100 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:17.466110 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:47:17.466116 | orchestrator | 2026-04-09 00:47:17.466121 | orchestrator | 2026-04-09 00:47:17.466126 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:47:17.466131 | orchestrator | Thursday 09 April 2026 00:47:15 +0000 (0:00:11.208) 0:01:25.958 ******** 2026-04-09 00:47:17.466137 | orchestrator | =============================================================================== 2026-04-09 00:47:17.466142 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 36.51s 2026-04-09 00:47:17.466147 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.21s 2026-04-09 00:47:17.466156 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.48s 2026-04-09 
00:47:17.466161 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.72s 2026-04-09 00:47:17.466167 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.01s 2026-04-09 00:47:17.466172 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.86s 2026-04-09 00:47:17.466177 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.49s 2026-04-09 00:47:17.466182 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.01s 2026-04-09 00:47:17.466188 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.99s 2026-04-09 00:47:17.466193 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.92s 2026-04-09 00:47:17.466198 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.92s 2026-04-09 00:47:17.466203 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.59s 2026-04-09 00:47:17.466209 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.57s 2026-04-09 00:47:17.466215 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.54s 2026-04-09 00:47:17.466221 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.45s 2026-04-09 00:47:17.466227 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s 2026-04-09 00:47:20.514103 | orchestrator | 2026-04-09 00:47:20 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:47:20.517007 | orchestrator | 2026-04-09 00:47:20 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:47:20.519645 | orchestrator | 2026-04-09 00:47:20 | INFO  | Task 
94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:47:20.520263 | orchestrator | 2026-04-09 00:47:20 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for the same three tasks, repeated every ~3 seconds from 00:47:23 through 00:48:18, omitted] 2026-04-09
00:48:18.462700 | orchestrator | 2026-04-09 00:48:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:21.502631 | orchestrator | 2026-04-09 00:48:21 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:21.504526 | orchestrator | 2026-04-09 00:48:21 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:21.506615 | orchestrator | 2026-04-09 00:48:21 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state STARTED 2026-04-09 00:48:21.507006 | orchestrator | 2026-04-09 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:24.542353 | orchestrator | 2026-04-09 00:48:24 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:24.542736 | orchestrator | 2026-04-09 00:48:24 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:24.545828 | orchestrator | 2026-04-09 00:48:24 | INFO  | Task 94bf7754-283f-4153-822e-f507a1487274 is in state SUCCESS 2026-04-09 00:48:24.547250 | orchestrator | 2026-04-09 00:48:24.547301 | orchestrator | 2026-04-09 00:48:24.547310 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-09 00:48:24.547317 | orchestrator | 2026-04-09 00:48:24.547323 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-09 00:48:24.547328 | orchestrator | Thursday 09 April 2026 00:45:44 +0000 (0:00:00.300) 0:00:00.300 ******** 2026-04-09 00:48:24.547334 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:48:24.547341 | orchestrator | 2026-04-09 00:48:24.547346 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-09 00:48:24.547351 | orchestrator | Thursday 09 April 2026 00:45:46 +0000 
(0:00:01.456) 0:00:01.757 ******** 2026-04-09 00:48:24.547357 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:48:24.547411 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:48:24.547417 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:48:24.547435 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:48:24.547441 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:48:24.547447 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:48:24.547464 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:48:24.547472 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:48:24.547478 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:48:24.547589 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:48:24.547595 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:48:24.547615 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:48:24.547622 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:48:24.547628 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:48:24.547634 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:48:24.547640 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:48:24.547645 | 
orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:48:24.547650 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:48:24.547655 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:48:24.547661 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:48:24.547667 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:48:24.547673 | orchestrator | 2026-04-09 00:48:24.547678 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-09 00:48:24.547684 | orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:04.137) 0:00:05.894 ******** 2026-04-09 00:48:24.547690 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:48:24.547697 | orchestrator | 2026-04-09 00:48:24.547703 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-09 00:48:24.547721 | orchestrator | Thursday 09 April 2026 00:45:51 +0000 (0:00:01.371) 0:00:07.266 ******** 2026-04-09 00:48:24.547730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.547738 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.547757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.547773 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.547781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.547788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.547794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.547821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547855 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547887 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547893 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.547897 | orchestrator | 2026-04-09 00:48:24.547900 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-09 00:48:24.547904 | orchestrator | Thursday 09 April 2026 00:45:57 +0000 (0:00:05.434) 0:00:12.701 ******** 2026-04-09 00:48:24.547910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:48:24.547914 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.547920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.547925 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.547945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.547952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.547960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.547965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.547971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.547977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.547981 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:24.547984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.547995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.547999 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:24.548002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548008 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:24.548012 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:24.548017 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548020 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:48:24.548023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548058 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:24.548062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548065 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:24.548068 | orchestrator |
2026-04-09 00:48:24.548071 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-04-09 00:48:24.548075 | orchestrator | Thursday 09 April 2026 00:45:59 +0000 (0:00:02.813) 0:00:15.514 ********
2026-04-09 00:48:24.548080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548086 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548096 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:24.548099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548111 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:24.548115 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548135 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548138 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:48:24.548141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548145 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:24.548429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548458 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:24.548464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548491 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:24.548496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548506 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:24.548511 | orchestrator |
2026-04-09 00:48:24.548516 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-04-09 00:48:24.548521 | orchestrator | Thursday 09 April 2026 00:46:02 +0000 (0:00:02.580) 0:00:18.094 ********
2026-04-09 00:48:24.548526 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:48:24.548530 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:24.548535 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:24.548540 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:24.548546 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:24.548555 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:24.548560 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:24.548565 | orchestrator |
2026-04-09 00:48:24.548570 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-09 00:48:24.548575 | orchestrator | Thursday 09 April 2026 00:46:04 +0000 (0:00:01.732) 0:00:19.827 ********
2026-04-09 00:48:24.548580 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:48:24.548586 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:24.548591 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:24.548596 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:24.548601 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:24.548606 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:24.548611 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:24.548617 | orchestrator |
2026-04-09 00:48:24.548622 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-09 00:48:24.548627 | orchestrator | Thursday 09 April 2026 00:46:05 +0000 (0:00:01.129) 0:00:21.420 ********
2026-04-09 00:48:24.548632 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:48:24.548637 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:24.548642 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:24.548648 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:24.548653 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:24.548658 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:24.548663 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:24.548668 | orchestrator |
2026-04-09 00:48:24.548673 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-04-09 00:48:24.548679 | orchestrator | Thursday 09 April 2026 00:46:06 +0000 (0:00:01.129) 0:00:22.550 ********
2026-04-09 00:48:24.548688 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:24.548693 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:24.548698 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:24.548703 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:24.548711 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:48:24.548716 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:48:24.548722 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:48:24.548727 | orchestrator |
2026-04-09 00:48:24.548732 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-09 00:48:24.548737 | orchestrator | Thursday 09 April 2026 00:46:10 +0000 (0:00:03.291) 0:00:25.841 ********
2026-04-09 00:48:24.548743 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548794 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:48:24.548816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548853 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:48:24.548902 | orchestrator |
2026-04-09 00:48:24.548908 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-09 00:48:24.548916 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:05.212) 0:00:31.054 ********
2026-04-09 00:48:24.548922 | orchestrator | [WARNING]: Skipped
2026-04-09 00:48:24.548928 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-09 00:48:24.548934 | orchestrator | to this access issue:
2026-04-09 00:48:24.548939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-09 00:48:24.548944 | orchestrator | directory
2026-04-09 00:48:24.548950 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:48:24.548955 | orchestrator |
2026-04-09 00:48:24.548961 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-09 00:48:24.548966 | orchestrator |
Thursday 09 April 2026 00:46:16 +0000 (0:00:01.244) 0:00:32.298 ******** 2026-04-09 00:48:24.548971 | orchestrator | [WARNING]: Skipped 2026-04-09 00:48:24.548977 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-09 00:48:24.548982 | orchestrator | to this access issue: 2026-04-09 00:48:24.548987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-09 00:48:24.548993 | orchestrator | directory 2026-04-09 00:48:24.548998 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:48:24.549003 | orchestrator | 2026-04-09 00:48:24.549009 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-09 00:48:24.549014 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:01.340) 0:00:33.638 ******** 2026-04-09 00:48:24.549019 | orchestrator | [WARNING]: Skipped 2026-04-09 00:48:24.549027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-09 00:48:24.549032 | orchestrator | to this access issue: 2026-04-09 00:48:24.549038 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-09 00:48:24.549043 | orchestrator | directory 2026-04-09 00:48:24.549048 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:48:24.549054 | orchestrator | 2026-04-09 00:48:24.549059 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-09 00:48:24.549065 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:00.790) 0:00:34.428 ******** 2026-04-09 00:48:24.549070 | orchestrator | [WARNING]: Skipped 2026-04-09 00:48:24.549076 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-09 00:48:24.549081 | orchestrator | to this access issue: 2026-04-09 00:48:24.549087 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-09 00:48:24.549092 | orchestrator | directory 2026-04-09 00:48:24.549098 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:48:24.549103 | orchestrator | 2026-04-09 00:48:24.549108 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-09 00:48:24.549113 | orchestrator | Thursday 09 April 2026 00:46:19 +0000 (0:00:01.019) 0:00:35.448 ******** 2026-04-09 00:48:24.549118 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:24.549123 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:24.549128 | orchestrator | changed: [testbed-manager] 2026-04-09 00:48:24.549134 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:24.549139 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:24.549145 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:24.549150 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:24.549157 | orchestrator | 2026-04-09 00:48:24.549164 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-09 00:48:24.549173 | orchestrator | Thursday 09 April 2026 00:46:24 +0000 (0:00:04.538) 0:00:39.987 ******** 2026-04-09 00:48:24.549183 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:48:24.549193 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:48:24.549203 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:48:24.549217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:48:24.549228 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 
00:48:24.549237 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:48:24.549247 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:48:24.549257 | orchestrator | 2026-04-09 00:48:24.549268 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-09 00:48:24.549278 | orchestrator | Thursday 09 April 2026 00:46:29 +0000 (0:00:04.738) 0:00:44.725 ******** 2026-04-09 00:48:24.549287 | orchestrator | changed: [testbed-manager] 2026-04-09 00:48:24.549297 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:24.549308 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:24.549317 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:24.549322 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:24.549327 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:24.549332 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:24.549337 | orchestrator | 2026-04-09 00:48:24.549342 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-09 00:48:24.549347 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:03.118) 0:00:47.844 ******** 2026-04-09 00:48:24.549359 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549365 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549373 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549377 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549388 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549396 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549403 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549407 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549419 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549447 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549452 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549467 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549497 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-09 00:48:24.549503 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549511 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549516 | orchestrator | 2026-04-09 00:48:24.549521 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-09 00:48:24.549526 | orchestrator | Thursday 09 April 2026 00:46:34 +0000 (0:00:02.533) 0:00:50.377 ******** 2026-04-09 00:48:24.549532 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:48:24.549538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:48:24.549541 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:48:24.549545 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:48:24.549549 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:48:24.549552 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 
00:48:24.549556 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:48:24.549559 | orchestrator | 2026-04-09 00:48:24.549564 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-09 00:48:24.549569 | orchestrator | Thursday 09 April 2026 00:46:37 +0000 (0:00:02.564) 0:00:52.942 ******** 2026-04-09 00:48:24.549574 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:48:24.549580 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:48:24.549585 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:48:24.549594 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:48:24.549599 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:48:24.549604 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:48:24.549609 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:48:24.549614 | orchestrator | 2026-04-09 00:48:24.549622 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-09 00:48:24.549627 | orchestrator | Thursday 09 April 2026 00:46:39 +0000 (0:00:02.568) 0:00:55.510 ******** 2026-04-09 00:48:24.549633 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549653 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549673 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:48:24.549702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:48:24.549761 | orchestrator | 2026-04-09 00:48:24.549767 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-09 00:48:24.549772 | orchestrator | Thursday 09 April 2026 00:46:43 +0000 (0:00:03.816) 0:00:59.326 ******** 2026-04-09 00:48:24.549777 | orchestrator | changed: [testbed-manager] => { 2026-04-09 00:48:24.549783 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:48:24.549789 | orchestrator | } 2026-04-09 00:48:24.549794 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:48:24.549799 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:48:24.549805 | orchestrator | } 2026-04-09 00:48:24.549810 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:48:24.549815 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:48:24.549819 | orchestrator | } 2026-04-09 00:48:24.549823 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:48:24.549826 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:48:24.549830 | orchestrator | } 2026-04-09 00:48:24.549833 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 00:48:24.549837 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:48:24.549841 | orchestrator | } 2026-04-09 00:48:24.549844 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 00:48:24.549848 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:48:24.549851 | orchestrator | } 2026-04-09 00:48:24.549855 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 00:48:24.549858 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:48:24.549862 | orchestrator | } 2026-04-09 00:48:24.549866 | orchestrator | 2026-04-09 00:48:24.549870 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:48:24.549876 | orchestrator | Thursday 09 April 2026 00:46:44 +0000 (0:00:00.693) 0:01:00.020 ******** 2026-04-09 00:48:24.549883 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:48:24.549887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549891 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:48:24.549899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-09 00:48:24.549903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549907 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:48:24.549910 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:24.549917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:48:24.549923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:48:24.549940 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:24.549946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-09 00:48:24.549961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:48:24.549966 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:24.549972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:48:24.549977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.549997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.550002 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:24.550007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.550052 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:24.550063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:48:24.550069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.550075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:48:24.550081 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:24.550087 | orchestrator | 2026-04-09 00:48:24.550093 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-09 00:48:24.550099 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:02.586) 0:01:02.607 ******** 2026-04-09 00:48:24.550110 | orchestrator | changed: [testbed-manager] 2026-04-09 00:48:24.550115 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:24.550120 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:24.550126 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:24.550132 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:24.550138 | orchestrator | changed: [testbed-node-4] 2026-04-09 
00:48:24.550144 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:24.550150 | orchestrator | 2026-04-09 00:48:24.550156 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-09 00:48:24.550162 | orchestrator | Thursday 09 April 2026 00:46:49 +0000 (0:00:02.124) 0:01:04.731 ******** 2026-04-09 00:48:24.550168 | orchestrator | changed: [testbed-manager] 2026-04-09 00:48:24.550174 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:24.550180 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:24.550185 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:24.550191 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:24.550197 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:24.550203 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:24.550209 | orchestrator | 2026-04-09 00:48:24.550215 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:48:24.550220 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:01.401) 0:01:06.133 ******** 2026-04-09 00:48:24.550227 | orchestrator | 2026-04-09 00:48:24.550232 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:48:24.550239 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:00.071) 0:01:06.204 ******** 2026-04-09 00:48:24.550245 | orchestrator | 2026-04-09 00:48:24.550251 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:48:24.550257 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:00.067) 0:01:06.271 ******** 2026-04-09 00:48:24.550263 | orchestrator | 2026-04-09 00:48:24.550273 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:48:24.550278 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:00.074) 0:01:06.346 ******** 2026-04-09 
00:48:24.550284 | orchestrator | 2026-04-09 00:48:24.550290 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:48:24.550295 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:00.069) 0:01:06.416 ******** 2026-04-09 00:48:24.550300 | orchestrator | 2026-04-09 00:48:24.550306 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:48:24.550312 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:00.061) 0:01:06.477 ******** 2026-04-09 00:48:24.550318 | orchestrator | 2026-04-09 00:48:24.550323 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:48:24.550329 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:00.080) 0:01:06.558 ******** 2026-04-09 00:48:24.550334 | orchestrator | 2026-04-09 00:48:24.550340 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-09 00:48:24.550345 | orchestrator | Thursday 09 April 2026 00:46:51 +0000 (0:00:00.101) 0:01:06.659 ******** 2026-04-09 00:48:24.550351 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:24.550356 | orchestrator | changed: [testbed-manager] 2026-04-09 00:48:24.550364 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:24.550370 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:24.550376 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:24.550382 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:24.550388 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:24.550394 | orchestrator | 2026-04-09 00:48:24.550399 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-09 00:48:24.550405 | orchestrator | Thursday 09 April 2026 00:47:20 +0000 (0:00:29.372) 0:01:36.032 ******** 2026-04-09 00:48:24.550411 | orchestrator | changed: [testbed-node-0] 2026-04-09 
00:48:24.550416 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:24.550425 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:24.550434 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:24.550440 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:24.550446 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:24.550451 | orchestrator | changed: [testbed-manager] 2026-04-09 00:48:24.550457 | orchestrator | 2026-04-09 00:48:24.550462 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-09 00:48:24.550468 | orchestrator | Thursday 09 April 2026 00:48:10 +0000 (0:00:49.693) 0:02:25.726 ******** 2026-04-09 00:48:24.550473 | orchestrator | ok: [testbed-manager] 2026-04-09 00:48:24.550478 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:24.550512 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:24.550518 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:24.550523 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:48:24.550528 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:48:24.550533 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:48:24.550539 | orchestrator | 2026-04-09 00:48:24.550544 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-09 00:48:24.550549 | orchestrator | Thursday 09 April 2026 00:48:12 +0000 (0:00:02.035) 0:02:27.761 ******** 2026-04-09 00:48:24.550555 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:24.550560 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:24.550566 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:24.550571 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:24.550576 | orchestrator | changed: [testbed-manager] 2026-04-09 00:48:24.550582 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:24.550587 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:24.550592 | orchestrator | 2026-04-09 00:48:24.550597 
| orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:48:24.550603 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:48:24.550609 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:48:24.550615 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:48:24.550620 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:48:24.550625 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:48:24.550630 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:48:24.550635 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:48:24.550640 | orchestrator | 2026-04-09 00:48:24.550646 | orchestrator | 2026-04-09 00:48:24.550651 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:48:24.550657 | orchestrator | Thursday 09 April 2026 00:48:21 +0000 (0:00:09.788) 0:02:37.550 ******** 2026-04-09 00:48:24.550662 | orchestrator | =============================================================================== 2026-04-09 00:48:24.550668 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 49.69s 2026-04-09 00:48:24.550673 | orchestrator | common : Restart fluentd container ------------------------------------- 29.37s 2026-04-09 00:48:24.550678 | orchestrator | common : Restart cron container ----------------------------------------- 9.79s 2026-04-09 00:48:24.550683 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.43s 2026-04-09 
00:48:24.550693 | orchestrator | common : Copying over config.json files for services -------------------- 5.21s 2026-04-09 00:48:24.550702 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.74s 2026-04-09 00:48:24.550707 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.54s 2026-04-09 00:48:24.550713 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.14s 2026-04-09 00:48:24.550718 | orchestrator | service-check-containers : common | Check containers -------------------- 3.82s 2026-04-09 00:48:24.550725 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.29s 2026-04-09 00:48:24.550730 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.12s 2026-04-09 00:48:24.550735 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.81s 2026-04-09 00:48:24.550741 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.59s 2026-04-09 00:48:24.550747 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.58s 2026-04-09 00:48:24.550752 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.57s 2026-04-09 00:48:24.550757 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.56s 2026-04-09 00:48:24.550763 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.53s 2026-04-09 00:48:24.550769 | orchestrator | common : Creating log volume -------------------------------------------- 2.12s 2026-04-09 00:48:24.550774 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.04s 2026-04-09 00:48:24.550780 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 1.73s 2026-04-09 
00:48:24.550788 | orchestrator | 2026-04-09 00:48:24 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:48:24.550793 | orchestrator | 2026-04-09 00:48:24 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state STARTED 2026-04-09 00:48:24.550798 | orchestrator | 2026-04-09 00:48:24 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:48:24.550803 | orchestrator | 2026-04-09 00:48:24 | INFO  | Task 244d10fa-7b1b-434b-8889-2c010eda7c66 is in state STARTED 2026-04-09 00:48:24.550808 | orchestrator | 2026-04-09 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:27.575341 | orchestrator | 2026-04-09 00:48:27 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:27.575410 | orchestrator | 2026-04-09 00:48:27 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:27.575418 | orchestrator | 2026-04-09 00:48:27 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:48:27.575424 | orchestrator | 2026-04-09 00:48:27 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state STARTED 2026-04-09 00:48:27.575429 | orchestrator | 2026-04-09 00:48:27 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:48:27.575434 | orchestrator | 2026-04-09 00:48:27 | INFO  | Task 244d10fa-7b1b-434b-8889-2c010eda7c66 is in state STARTED 2026-04-09 00:48:27.575439 | orchestrator | 2026-04-09 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:30.598576 | orchestrator | 2026-04-09 00:48:30 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:30.598641 | orchestrator | 2026-04-09 00:48:30 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:30.599256 | orchestrator | 2026-04-09 00:48:30 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 
00:48:30.600595 | orchestrator | 2026-04-09 00:48:30 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state STARTED 2026-04-09 00:48:30.600626 | orchestrator | 2026-04-09 00:48:30 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:48:30.601181 | orchestrator | 2026-04-09 00:48:30 | INFO  | Task 244d10fa-7b1b-434b-8889-2c010eda7c66 is in state STARTED 2026-04-09 00:48:30.601203 | orchestrator | 2026-04-09 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:33.630219 | orchestrator | 2026-04-09 00:48:33 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:33.630275 | orchestrator | 2026-04-09 00:48:33 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:33.631323 | orchestrator | 2026-04-09 00:48:33 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:48:33.631945 | orchestrator | 2026-04-09 00:48:33 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state STARTED 2026-04-09 00:48:33.632617 | orchestrator | 2026-04-09 00:48:33 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:48:33.633563 | orchestrator | 2026-04-09 00:48:33 | INFO  | Task 244d10fa-7b1b-434b-8889-2c010eda7c66 is in state STARTED 2026-04-09 00:48:33.634907 | orchestrator | 2026-04-09 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:36.663191 | orchestrator | 2026-04-09 00:48:36 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:36.663557 | orchestrator | 2026-04-09 00:48:36 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:36.664323 | orchestrator | 2026-04-09 00:48:36 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:48:36.665608 | orchestrator | 2026-04-09 00:48:36 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state STARTED 2026-04-09 
00:48:36.667710 | orchestrator | 2026-04-09 00:48:36 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:48:36.668338 | orchestrator | 2026-04-09 00:48:36 | INFO  | Task 244d10fa-7b1b-434b-8889-2c010eda7c66 is in state STARTED
2026-04-09 00:48:36.668380 | orchestrator | 2026-04-09 00:48:36 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:39.721305 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:48:39.721376 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:48:39.723825 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED
2026-04-09 00:48:39.739218 | orchestrator |
2026-04-09 00:48:39.739297 | orchestrator |
2026-04-09 00:48:39.739308 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:48:39.739318 | orchestrator |
2026-04-09 00:48:39.739326 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:48:39.739334 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.420) 0:00:00.420 ********
2026-04-09 00:48:39.739342 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:39.739351 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:39.739359 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:39.739366 | orchestrator |
2026-04-09 00:48:39.739374 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:48:39.739381 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.351) 0:00:00.771 ********
2026-04-09 00:48:39.739389 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-09 00:48:39.739397 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-09 00:48:39.739404 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-09 00:48:39.739412 | orchestrator |
2026-04-09 00:48:39.739419 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-09 00:48:39.739426 | orchestrator |
2026-04-09 00:48:39.739456 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-09 00:48:39.739464 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.502) 0:00:01.274 ********
2026-04-09 00:48:39.739498 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:48:39.739508 | orchestrator |
2026-04-09 00:48:39.739516 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-09 00:48:39.739523 | orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:00.568) 0:00:01.842 ********
2026-04-09 00:48:39.739530 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-09 00:48:39.739538 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-09 00:48:39.739545 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-09 00:48:39.739553 | orchestrator |
2026-04-09 00:48:39.739560 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-09 00:48:39.739567 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:01.637) 0:00:03.479 ********
2026-04-09 00:48:39.739575 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-09 00:48:39.739582 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-09 00:48:39.739589 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-09 00:48:39.739597 | orchestrator |
2026-04-09 00:48:39.739604 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-04-09 00:48:39.739612 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:01.902) 0:00:05.381 ********
2026-04-09 00:48:39.739623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:48:39.739633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:48:39.739668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:48:39.739677 | orchestrator |
2026-04-09 00:48:39.739685 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-04-09 00:48:39.739699 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:01.257) 0:00:06.639 ********
2026-04-09 00:48:39.739706 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:48:39.739714 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:48:39.739721 | orchestrator | }
2026-04-09 00:48:39.739729 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:48:39.739737 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:48:39.739745 | orchestrator | }
2026-04-09 00:48:39.739755 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:48:39.739763 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:48:39.739771 | orchestrator | }
2026-04-09 00:48:39.739780 | orchestrator |
2026-04-09 00:48:39.739789 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 00:48:39.739802 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:00.329) 0:00:06.968 ********
2026-04-09 00:48:39.739814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:48:39.739827 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:39.739839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:48:39.739851 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:39.739864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:48:39.739878 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:39.739890 | orchestrator |
2026-04-09 00:48:39.739904 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-09 00:48:39.739917 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:01.411) 0:00:08.379 ********
2026-04-09 00:48:39.739930 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:39.739939 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:39.739947 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:39.739956 | orchestrator |
2026-04-09 00:48:39.739964 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:48:39.739975 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 00:48:39.740000 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 00:48:39.740009 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 00:48:39.740017 | orchestrator |
2026-04-09 00:48:39.740026 | orchestrator |
2026-04-09 00:48:39.740035 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:48:39.740048 | orchestrator | Thursday 09 April 2026 00:48:38 +0000 (0:00:04.043) 0:00:12.423 ********
2026-04-09 00:48:39.740064 | orchestrator | ===============================================================================
2026-04-09 00:48:39.740073 | orchestrator | memcached : Restart memcached container --------------------------------- 4.04s
2026-04-09 00:48:39.740081 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.90s
2026-04-09 00:48:39.740090 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.64s
2026-04-09 00:48:39.740098 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.41s
2026-04-09 00:48:39.740106 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.26s
2026-04-09 00:48:39.740115 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.57s
2026-04-09 00:48:39.740123 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2026-04-09 00:48:39.740132 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-04-09 00:48:39.740141 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.33s
2026-04-09 00:48:39.740150 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state STARTED
2026-04-09 00:48:39.740159 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:48:39.740167 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task 244d10fa-7b1b-434b-8889-2c010eda7c66 is in state SUCCESS
2026-04-09 00:48:39.740175 | orchestrator | 2026-04-09 00:48:39 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:42.763119 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:48:42.764136 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:48:42.764205 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:48:42.764951 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED
2026-04-09 00:48:42.765788 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state STARTED
2026-04-09 00:48:42.766356 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:48:42.766584 | orchestrator | 2026-04-09 00:48:42 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:45.803578 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:48:45.803810 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:48:45.804940 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:48:45.806000 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED
2026-04-09 00:48:45.807185 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state STARTED
2026-04-09 00:48:45.808218 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:48:45.808721 | orchestrator | 2026-04-09 00:48:45 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:48.843429 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:48:48.843729 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED
2026-04-09 00:48:48.846339 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:48:48.846758 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED
2026-04-09 00:48:48.847369 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task 603fac1a-12bc-4418-adb4-7b469ba7bdbf is in state SUCCESS
2026-04-09 00:48:48.848630 | orchestrator |
2026-04-09 00:48:48.848668 | orchestrator |
2026-04-09 00:48:48.848676 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:48:48.848684 | orchestrator |
2026-04-09 00:48:48.848691 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:48:48.848698 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.507) 0:00:00.507 ********
2026-04-09 00:48:48.848705 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:48.848713 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:48.848720 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:48.848726 | orchestrator |
2026-04-09 00:48:48.848734 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:48:48.848740 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.306) 0:00:00.813 ********
2026-04-09 00:48:48.848747 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-09 00:48:48.848782 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-09 00:48:48.848799 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-09 00:48:48.848811 | orchestrator |
2026-04-09 00:48:48.848821 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-09 00:48:48.848831 | orchestrator |
2026-04-09 00:48:48.848842 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-09 00:48:48.848852 | orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:00.398) 0:00:01.212 ********
2026-04-09 00:48:48.848862 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:48:48.848877 | orchestrator |
2026-04-09 00:48:48.848889 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-09 00:48:48.848899 | orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:00.863) 0:00:02.075 ********
2026-04-09 00:48:48.848913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.848930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.848964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.848976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849032 | orchestrator |
2026-04-09 00:48:48.849039 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-09 00:48:48.849045 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:01.750) 0:00:03.828 ********
2026-04-09 00:48:48.849052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849103 | orchestrator |
2026-04-09 00:48:48.849110 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-09 00:48:48.849116 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:02.689) 0:00:06.518 ********
2026-04-09 00:48:48.849123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849179 | orchestrator |
2026-04-09 00:48:48.849198 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-04-09 00:48:48.849204 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:02.546) 0:00:09.065 ********
2026-04-09 00:48:48.849214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849280 | orchestrator |
2026-04-09 00:48:48.849288 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-04-09 00:48:48.849295 | orchestrator | Thursday 09 April 2026 00:48:37 +0000 (0:00:02.484) 0:00:11.550 ********
2026-04-09 00:48:48.849303 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:48:48.849310 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:48:48.849318 | orchestrator | }
2026-04-09 00:48:48.849325 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:48:48.849332 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:48:48.849339 | orchestrator | }
2026-04-09 00:48:48.849350 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:48:48.849357 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:48:48.849365 | orchestrator | }
2026-04-09 00:48:48.849372 | orchestrator |
2026-04-09 00:48:48.849379 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 00:48:48.849386 | orchestrator | Thursday 09 April 2026 00:48:38 +0000 (0:00:01.093) 0:00:12.643 ********
2026-04-09 00:48:48.849394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849430 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:48.849437 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:48.849445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:48.849464 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:48.849493 | orchestrator |
2026-04-09 00:48:48.849509 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-09 00:48:48.849516 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:01.124) 0:00:13.767 ********
2026-04-09 00:48:48.849524 | orchestrator |
2026-04-09 00:48:48.849531 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-09 00:48:48.849538 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:00.050) 0:00:13.817 ********
2026-04-09 00:48:48.849546 | orchestrator |
2026-04-09 00:48:48.849553 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-09 00:48:48.849560 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:00.065) 0:00:13.883 ********
2026-04-09 00:48:48.849568 | orchestrator |
2026-04-09 00:48:48.849575 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-09 00:48:48.849583 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:00.086) 0:00:13.969 ********
2026-04-09 00:48:48.849590 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:48.849597 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:48.849603 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:48.849610 | orchestrator |
2026-04-09 00:48:48.849616 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-09 00:48:48.849622 | orchestrator | Thursday 09 April 2026 00:48:43 +0000 (0:00:03.347) 0:00:17.318 ********
2026-04-09 00:48:48.849629 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:48.849635 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:48.849641 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:48.849648 | orchestrator |
2026-04-09 00:48:48.849654 | orchestrator | PLAY RECAP
********************************************************************* 2026-04-09 00:48:48.849662 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:48:48.849670 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:48:48.849676 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:48:48.849683 | orchestrator | 2026-04-09 00:48:48.849689 | orchestrator | 2026-04-09 00:48:48.849695 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:48:48.849702 | orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:03.539) 0:00:20.857 ******** 2026-04-09 00:48:48.849708 | orchestrator | =============================================================================== 2026-04-09 00:48:48.849714 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.54s 2026-04-09 00:48:48.849721 | orchestrator | redis : Restart redis container ----------------------------------------- 3.35s 2026-04-09 00:48:48.849727 | orchestrator | redis : Copying over default config.json files -------------------------- 2.69s 2026-04-09 00:48:48.849733 | orchestrator | redis : Copying over redis config files --------------------------------- 2.55s 2026-04-09 00:48:48.849740 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.48s 2026-04-09 00:48:48.849746 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.75s 2026-04-09 00:48:48.849752 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.11s 2026-04-09 00:48:48.849759 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.11s 2026-04-09 00:48:48.849765 | orchestrator | redis : include_tasks 
--------------------------------------------------- 0.87s 2026-04-09 00:48:48.849771 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2026-04-09 00:48:48.849777 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-09 00:48:48.849784 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s 2026-04-09 00:48:48.849790 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:48:48.849865 | orchestrator | 2026-04-09 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:51.882235 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:51.882327 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:51.883084 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:48:51.883388 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:48:51.883450 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:48:51.883459 | orchestrator | 2026-04-09 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:54.907818 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:54.909440 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:54.909790 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:48:54.910620 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in 
state STARTED 2026-04-09 00:48:54.911431 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:48:54.911488 | orchestrator | 2026-04-09 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:57.945543 | orchestrator | 2026-04-09 00:48:57 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:48:57.960226 | orchestrator | 2026-04-09 00:48:57 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:48:57.960294 | orchestrator | 2026-04-09 00:48:57 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:48:57.960300 | orchestrator | 2026-04-09 00:48:57 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:48:57.960305 | orchestrator | 2026-04-09 00:48:57 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:48:57.960310 | orchestrator | 2026-04-09 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:00.977649 | orchestrator | 2026-04-09 00:49:00 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:00.977970 | orchestrator | 2026-04-09 00:49:00 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:00.978737 | orchestrator | 2026-04-09 00:49:00 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:00.979286 | orchestrator | 2026-04-09 00:49:00 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:00.980100 | orchestrator | 2026-04-09 00:49:00 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:00.980133 | orchestrator | 2026-04-09 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:04.047281 | orchestrator | 2026-04-09 00:49:04 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 
00:49:04.048564 | orchestrator | 2026-04-09 00:49:04 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:04.050968 | orchestrator | 2026-04-09 00:49:04 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:04.051038 | orchestrator | 2026-04-09 00:49:04 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:04.052135 | orchestrator | 2026-04-09 00:49:04 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:04.052564 | orchestrator | 2026-04-09 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:07.073845 | orchestrator | 2026-04-09 00:49:07 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:07.076612 | orchestrator | 2026-04-09 00:49:07 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:07.076664 | orchestrator | 2026-04-09 00:49:07 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:07.077036 | orchestrator | 2026-04-09 00:49:07 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:07.077716 | orchestrator | 2026-04-09 00:49:07 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:07.077803 | orchestrator | 2026-04-09 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:10.114117 | orchestrator | 2026-04-09 00:49:10 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:10.115088 | orchestrator | 2026-04-09 00:49:10 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:10.115508 | orchestrator | 2026-04-09 00:49:10 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:10.116273 | orchestrator | 2026-04-09 00:49:10 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 
00:49:10.117344 | orchestrator | 2026-04-09 00:49:10 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:10.117370 | orchestrator | 2026-04-09 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:13.141027 | orchestrator | 2026-04-09 00:49:13 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:13.141430 | orchestrator | 2026-04-09 00:49:13 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:13.143019 | orchestrator | 2026-04-09 00:49:13 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:13.143533 | orchestrator | 2026-04-09 00:49:13 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:13.144264 | orchestrator | 2026-04-09 00:49:13 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:13.144337 | orchestrator | 2026-04-09 00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:16.171427 | orchestrator | 2026-04-09 00:49:16 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:16.173011 | orchestrator | 2026-04-09 00:49:16 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:16.174714 | orchestrator | 2026-04-09 00:49:16 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:16.176367 | orchestrator | 2026-04-09 00:49:16 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:16.177877 | orchestrator | 2026-04-09 00:49:16 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:16.177918 | orchestrator | 2026-04-09 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:19.204162 | orchestrator | 2026-04-09 00:49:19 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:19.206274 | orchestrator 
| 2026-04-09 00:49:19 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:19.206782 | orchestrator | 2026-04-09 00:49:19 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:19.207300 | orchestrator | 2026-04-09 00:49:19 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:19.207989 | orchestrator | 2026-04-09 00:49:19 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:19.208018 | orchestrator | 2026-04-09 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:22.237371 | orchestrator | 2026-04-09 00:49:22 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:22.238411 | orchestrator | 2026-04-09 00:49:22 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:22.239734 | orchestrator | 2026-04-09 00:49:22 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:22.242055 | orchestrator | 2026-04-09 00:49:22 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:22.242856 | orchestrator | 2026-04-09 00:49:22 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:22.242882 | orchestrator | 2026-04-09 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:25.281469 | orchestrator | 2026-04-09 00:49:25 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:25.283922 | orchestrator | 2026-04-09 00:49:25 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:25.287694 | orchestrator | 2026-04-09 00:49:25 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:25.288838 | orchestrator | 2026-04-09 00:49:25 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:25.290118 | orchestrator | 
2026-04-09 00:49:25 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:25.290146 | orchestrator | 2026-04-09 00:49:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:28.335938 | orchestrator | 2026-04-09 00:49:28 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:28.338251 | orchestrator | 2026-04-09 00:49:28 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:28.338328 | orchestrator | 2026-04-09 00:49:28 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:28.339533 | orchestrator | 2026-04-09 00:49:28 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state STARTED 2026-04-09 00:49:28.340540 | orchestrator | 2026-04-09 00:49:28 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:28.340592 | orchestrator | 2026-04-09 00:49:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:49:31.370349 | orchestrator | 2026-04-09 00:49:31 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:49:31.371086 | orchestrator | 2026-04-09 00:49:31 | INFO  | Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state STARTED 2026-04-09 00:49:31.372593 | orchestrator | 2026-04-09 00:49:31 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:49:31.372743 | orchestrator | 2026-04-09 00:49:31 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:49:31.375124 | orchestrator | 2026-04-09 00:49:31 | INFO  | Task 732eb9b0-3446-48d0-98c8-3a3130dac216 is in state SUCCESS 2026-04-09 00:49:31.376653 | orchestrator | 2026-04-09 00:49:31.376719 | orchestrator | 2026-04-09 00:49:31.376729 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:49:31.376737 | orchestrator | 2026-04-09 00:49:31.376744 | orchestrator | TASK [Group hosts 
based on Kolla action] *************************************** 2026-04-09 00:49:31.376751 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.541) 0:00:00.541 ******** 2026-04-09 00:49:31.376758 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:49:31.376765 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:49:31.376772 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:49:31.376778 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:49:31.376784 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:49:31.376790 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:49:31.376797 | orchestrator | 2026-04-09 00:49:31.376804 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:49:31.376810 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.880) 0:00:01.421 ******** 2026-04-09 00:49:31.376816 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:49:31.376823 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:49:31.376829 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:49:31.376834 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:49:31.376841 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:49:31.376846 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:49:31.376852 | orchestrator | 2026-04-09 00:49:31.376857 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-09 00:49:31.376863 | orchestrator | 2026-04-09 00:49:31.376869 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-09 00:49:31.376875 | orchestrator | Thursday 09 April 2026 00:48:27 
+0000 (0:00:00.786) 0:00:02.207 ******** 2026-04-09 00:49:31.376882 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:49:31.376890 | orchestrator | 2026-04-09 00:49:31.376897 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-09 00:49:31.376903 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:01.431) 0:00:03.638 ******** 2026-04-09 00:49:31.376909 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-09 00:49:31.376915 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-09 00:49:31.376921 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-09 00:49:31.376927 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-09 00:49:31.376933 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-09 00:49:31.376938 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-09 00:49:31.376944 | orchestrator | 2026-04-09 00:49:31.376949 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-09 00:49:31.376957 | orchestrator | Thursday 09 April 2026 00:48:30 +0000 (0:00:01.605) 0:00:05.244 ******** 2026-04-09 00:49:31.376964 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-09 00:49:31.376971 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-09 00:49:31.376977 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-09 00:49:31.376983 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-09 00:49:31.376989 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-09 00:49:31.376995 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-09 00:49:31.377001 | orchestrator | 2026-04-09 00:49:31.377006 | orchestrator 
| TASK [module-load : Drop module persistence] *********************************** 2026-04-09 00:49:31.377013 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:01.771) 0:00:07.016 ******** 2026-04-09 00:49:31.377038 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-09 00:49:31.377045 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-09 00:49:31.377051 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:49:31.377058 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-09 00:49:31.377064 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:49:31.377071 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-09 00:49:31.377077 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:49:31.377082 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-09 00:49:31.377088 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:49:31.377093 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:49:31.377099 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-09 00:49:31.377105 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:49:31.377111 | orchestrator | 2026-04-09 00:49:31.377117 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-09 00:49:31.377122 | orchestrator | Thursday 09 April 2026 00:48:33 +0000 (0:00:01.244) 0:00:08.260 ******** 2026-04-09 00:49:31.377129 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:49:31.377135 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:49:31.377143 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:49:31.377149 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:49:31.377214 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:49:31.377220 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:49:31.377225 | orchestrator | 2026-04-09 00:49:31.377246 | orchestrator | TASK 
[openvswitch : Ensuring config directories exist] ************************* 2026-04-09 00:49:31.377253 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:00.949) 0:00:09.210 ******** 2026-04-09 00:49:31.377282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:49:31.377294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:49:31.377301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:49:31.377316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:49:31.377322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:49:31.377332 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:49:31.377343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:49:31.377349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377403 | orchestrator |
2026-04-09 00:49:31.377411 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-04-09 00:49:31.377418 | orchestrator | Thursday 09 April 2026 00:48:37 +0000 (0:00:02.496) 0:00:11.706 ********
2026-04-09 00:49:31.377425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377703 | orchestrator |
2026-04-09 00:49:31.377710 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-09 00:49:31.377717 | orchestrator | Thursday 09 April 2026 00:48:40 +0000 (0:00:03.718) 0:00:15.425 ********
2026-04-09 00:49:31.377724 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:49:31.377732 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:49:31.377739 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:49:31.377746 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:49:31.377753 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:49:31.377760 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:49:31.377767 | orchestrator |
2026-04-09 00:49:31.377774 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-04-09 00:49:31.377781 | orchestrator | Thursday 09 April 2026 00:48:41 +0000 (0:00:00.600) 0:00:16.026 ********
2026-04-09 00:49:31.377793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared',
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.377846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.377901 | orchestrator |
2026-04-09 00:49:31.377908 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-04-09 00:49:31.377915 | orchestrator | Thursday 09 April 2026 00:48:44 +0000 (0:00:02.634) 0:00:18.661 ********
2026-04-09 00:49:31.377921 | orchestrator | changed: [testbed-node-3] => {
2026-04-09 00:49:31.377929 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:49:31.377936 | orchestrator | }
2026-04-09 00:49:31.377943 | orchestrator | changed: [testbed-node-4] => {
2026-04-09 00:49:31.377950 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:49:31.377957 | orchestrator | }
2026-04-09 00:49:31.377964 | orchestrator | changed: [testbed-node-5] => {
2026-04-09 00:49:31.377971 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:49:31.377978 | orchestrator | }
2026-04-09 00:49:31.377985 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:49:31.377992 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:49:31.377998 | orchestrator | }
2026-04-09 00:49:31.378005 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:49:31.378270 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:49:31.378293 | orchestrator | }
2026-04-09 00:49:31.378301 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:49:31.378308 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:49:31.378315 | orchestrator | }
2026-04-09 00:49:31.378323 | orchestrator |
2026-04-09 00:49:31.378331 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 00:49:31.378339 | orchestrator | Thursday 09 April 2026 00:48:45 +0000 (0:00:00.880) 0:00:19.541 ********
2026-04-09 00:49:31.378347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db',
'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.378357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.378364 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:49:31.378657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.378684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.378691 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:49:31.378698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.378706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.378713 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:49:31.378719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.378731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.378744 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:49:31.378759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.378767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.378773 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:49:31.378780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-09 00:49:31.378786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-09 00:49:31.378793 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:49:31.378800 | orchestrator |
2026-04-09 00:49:31.378807 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:49:31.378813 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:02.112) 0:00:21.654 ********
2026-04-09 00:49:31.378819 | orchestrator |
2026-04-09 00:49:31.378825 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:49:31.378831 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.267) 0:00:21.921 ********
2026-04-09 00:49:31.378837 | orchestrator |
2026-04-09 00:49:31.378842 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:49:31.378849 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.201) 0:00:22.122 ********
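[editor's note] The `healthcheck` dicts logged by the container-check tasks above (interval, retries, start_period, test, timeout) follow the shape kolla-ansible passes to its container modules. As a minimal illustrative sketch only — the actual rendering happens inside kolla-ansible's docker/podman wrappers, and the flag mapping here is an assumption — this shows how such a dict could translate into `docker run` style `--health-*` flags:

```python
def health_flags(hc):
    """Render a kolla-style healthcheck dict into docker-run --health-* flags.

    The 'CMD-SHELL' marker means the second element is run through a shell,
    mirroring Docker's HEALTHCHECK semantics.
    """
    test = hc["test"]
    health_cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test[1:])
    return [
        f"--health-cmd={health_cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# The openvswitch_db healthcheck exactly as it appears in the log above.
hc = {'interval': '30', 'retries': '3', 'start_period': '5',
      'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}
print(health_flags(hc))
```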
2026-04-09 00:49:31.378855 | orchestrator |
2026-04-09 00:49:31.378862 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:49:31.378875 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.138) 0:00:22.261 ********
2026-04-09 00:49:31.378881 | orchestrator |
2026-04-09 00:49:31.378888 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:49:31.378979 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.113) 0:00:22.375 ********
2026-04-09 00:49:31.378988 | orchestrator |
2026-04-09 00:49:31.378996 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:49:31.379003 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:00.366) 0:00:22.741 ********
2026-04-09 00:49:31.379009 | orchestrator |
2026-04-09 00:49:31.379016 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-09 00:49:31.379030 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:00.336) 0:00:23.078 ********
2026-04-09 00:49:31.379038 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:49:31.379045 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:49:31.379051 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:49:31.379058 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:49:31.379065 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:49:31.379072 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:49:31.379079 | orchestrator |
2026-04-09 00:49:31.379086 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-09 00:49:31.379098 | orchestrator | Thursday 09 April 2026 00:48:57 +0000 (0:00:09.111) 0:00:32.189 ********
2026-04-09 00:49:31.379105 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:49:31.379113 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:49:31.379120 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:49:31.379127 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:49:31.379133 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:49:31.379138 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:49:31.379145 | orchestrator |
2026-04-09 00:49:31.379151 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-09 00:49:31.379157 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:01.595) 0:00:33.785 ********
2026-04-09 00:49:31.379163 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:49:31.379169 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:49:31.379175 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:49:31.379181 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:49:31.379186 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:49:31.379192 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:49:31.379198 | orchestrator |
2026-04-09 00:49:31.379204 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-09 00:49:31.379211 | orchestrator | Thursday 09 April 2026 00:49:08 +0000 (0:00:08.727) 0:00:42.512 ********
2026-04-09 00:49:31.379217 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-09 00:49:31.379224 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-09 00:49:31.379230 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-09 00:49:31.379237 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-09 00:49:31.379243 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-09 00:49:31.379249 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-09 00:49:31.379258 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-09 00:49:31.379267 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-09 00:49:31.379276 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-09 00:49:31.379290 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-09 00:49:31.379300 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-09 00:49:31.379308 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-09 00:49:31.379317 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:49:31.379326 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:49:31.379335 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:49:31.379344 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:49:31.379352 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:49:31.379361 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:49:31.379370 | orchestrator |
2026-04-09 00:49:31.379378 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-09 00:49:31.379387 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:06.888) 0:00:49.401 ********
2026-04-09 00:49:31.379397 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-09 00:49:31.379406 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:49:31.379416 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-09 00:49:31.379424 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:49:31.379434 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-09 00:49:31.379465 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:49:31.379473 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-04-09 00:49:31.379482 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-04-09 00:49:31.379491 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-09 00:49:31.379501 | orchestrator |
2026-04-09 00:49:31.379510 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-09 00:49:31.379519 | orchestrator | Thursday 09 April 2026 00:49:17 +0000 (0:00:02.567) 0:00:51.969 ********
2026-04-09 00:49:31.379533 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:49:31.379541 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:49:31.379550 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:49:31.379559 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:49:31.379568 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:49:31.379577 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:49:31.379586 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:49:31.379600 | orchestrator | changed:
[testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-09 00:49:31.379609 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-09 00:49:31.379617 | orchestrator | 2026-04-09 00:49:31.379626 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-09 00:49:31.379632 | orchestrator | Thursday 09 April 2026 00:49:21 +0000 (0:00:03.542) 0:00:55.511 ******** 2026-04-09 00:49:31.379638 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:49:31.379644 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:49:31.379650 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:49:31.379657 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:49:31.379663 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:49:31.379669 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:49:31.379675 | orchestrator | 2026-04-09 00:49:31.379687 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:49:31.379694 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 00:49:31.379701 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 00:49:31.379708 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 00:49:31.379714 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:49:31.379720 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:49:31.379727 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:49:31.379733 | orchestrator | 2026-04-09 00:49:31.379739 | orchestrator | 2026-04-09 00:49:31.379745 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-09 00:49:31.379751 | orchestrator | Thursday 09 April 2026 00:49:29 +0000 (0:00:08.157) 0:01:03.669 ******** 2026-04-09 00:49:31.379758 | orchestrator | =============================================================================== 2026-04-09 00:49:31.379765 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.89s 2026-04-09 00:49:31.379771 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.11s 2026-04-09 00:49:31.379777 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.89s 2026-04-09 00:49:31.379783 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.72s 2026-04-09 00:49:31.379790 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.54s 2026-04-09 00:49:31.379796 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.63s 2026-04-09 00:49:31.379843 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.57s 2026-04-09 00:49:31.379850 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.50s 2026-04-09 00:49:31.379857 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.11s 2026-04-09 00:49:31.379863 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.77s 2026-04-09 00:49:31.379868 | orchestrator | module-load : Load modules ---------------------------------------------- 1.61s 2026-04-09 00:49:31.379875 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.60s 2026-04-09 00:49:31.379881 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.43s 2026-04-09 00:49:31.379887 | orchestrator | openvswitch : Flush Handlers 
-------------------------------------------- 1.42s 2026-04-09 00:49:31.379893 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.24s 2026-04-09 00:49:31.379899 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.95s 2026-04-09 00:49:31.379906 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.88s 2026-04-09 00:49:31.379912 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2026-04-09 00:49:31.379918 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s 2026-04-09 00:49:31.379924 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.60s 2026-04-09 00:49:31.379930 | orchestrator | 2026-04-09 00:49:31 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:49:31.379936 | orchestrator | 2026-04-09 00:49:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:08.064257 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:50:08.065259 | orchestrator | 2026-04-09 00:50:08 | INFO  |
Task d9396dc1-b8d0-4769-b0b0-b9a040f7a15d is in state SUCCESS 2026-04-09 00:50:08.066781 | orchestrator | 2026-04-09 00:50:08.066817 | orchestrator | 2026-04-09 00:50:08.066823 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-09 00:50:08.066828 | orchestrator | 2026-04-09 00:50:08.066832 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-09 00:50:08.066837 | orchestrator | Thursday 09 April 2026 00:45:45 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-04-09 00:50:08.066842 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:50:08.066847 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:50:08.066851 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:50:08.066855 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:08.066863 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:08.066867 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:08.066871 | orchestrator | 2026-04-09 00:50:08.066875 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-09 00:50:08.066879 | orchestrator | Thursday 09 April 2026 00:45:45 +0000 (0:00:00.601) 0:00:00.878 ******** 2026-04-09 00:50:08.066883 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.066888 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.066892 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.066896 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.066900 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.066904 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.066907 | orchestrator | 2026-04-09 00:50:08.066911 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-09 00:50:08.066915 | orchestrator | Thursday 09 April 2026 00:45:46 +0000 (0:00:00.726) 0:00:01.605 ******** 2026-04-09 00:50:08.066919 | orchestrator | 
skipping: [testbed-node-3] 2026-04-09 00:50:08.066923 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.066927 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.066931 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.066935 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.066939 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.066943 | orchestrator | 2026-04-09 00:50:08.066947 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-09 00:50:08.066950 | orchestrator | Thursday 09 April 2026 00:45:47 +0000 (0:00:00.520) 0:00:02.125 ******** 2026-04-09 00:50:08.066954 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:50:08.066972 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:50:08.066976 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:50:08.066980 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.066984 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:08.066988 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:08.066992 | orchestrator | 2026-04-09 00:50:08.066995 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-09 00:50:08.066999 | orchestrator | Thursday 09 April 2026 00:45:49 +0000 (0:00:02.177) 0:00:04.303 ******** 2026-04-09 00:50:08.067003 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:50:08.067007 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:50:08.067011 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:50:08.067015 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.067018 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:08.067022 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:08.067026 | orchestrator | 2026-04-09 00:50:08.067030 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-09 00:50:08.067034 | 
orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:01.088) 0:00:05.391 ******** 2026-04-09 00:50:08.067038 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:50:08.067041 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:50:08.067045 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:50:08.067049 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.067053 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:08.067057 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:08.067060 | orchestrator | 2026-04-09 00:50:08.067064 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-09 00:50:08.067068 | orchestrator | Thursday 09 April 2026 00:45:52 +0000 (0:00:02.278) 0:00:07.670 ******** 2026-04-09 00:50:08.067072 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.067076 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.067080 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.067083 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.067087 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.067091 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.067095 | orchestrator | 2026-04-09 00:50:08.067099 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-09 00:50:08.067103 | orchestrator | Thursday 09 April 2026 00:45:53 +0000 (0:00:01.208) 0:00:08.878 ******** 2026-04-09 00:50:08.067107 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.067111 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.067115 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.067118 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.067122 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.067126 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.067130 | orchestrator | 2026-04-09 
00:50:08.067134 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-09 00:50:08.067138 | orchestrator | Thursday 09 April 2026 00:45:54 +0000 (0:00:00.922) 0:00:09.801 ******** 2026-04-09 00:50:08.067141 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:50:08.067145 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:50:08.067149 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.067153 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:50:08.067157 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:50:08.067161 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.067165 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:50:08.067168 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:50:08.067172 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.067182 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:50:08.067195 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:50:08.067200 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.067203 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:50:08.067207 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:50:08.067211 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.067215 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:50:08.067221 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  
2026-04-09 00:50:08.067225 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.067228 | orchestrator | 2026-04-09 00:50:08.067238 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-09 00:50:08.067242 | orchestrator | Thursday 09 April 2026 00:45:55 +0000 (0:00:00.990) 0:00:10.791 ******** 2026-04-09 00:50:08.067246 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.067250 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.067254 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.067257 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.067261 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.067265 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.067269 | orchestrator | 2026-04-09 00:50:08.067273 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-09 00:50:08.067277 | orchestrator | Thursday 09 April 2026 00:45:57 +0000 (0:00:01.098) 0:00:11.890 ******** 2026-04-09 00:50:08.067281 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:50:08.067285 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:50:08.067289 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:50:08.067293 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:08.067297 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:08.067301 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:08.067304 | orchestrator | 2026-04-09 00:50:08.067308 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-09 00:50:08.067312 | orchestrator | Thursday 09 April 2026 00:45:57 +0000 (0:00:00.728) 0:00:12.618 ******** 2026-04-09 00:50:08.067316 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:50:08.067320 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:08.067324 | orchestrator | changed: [testbed-node-0] 2026-04-09 
00:50:08.067328 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:08.067332 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:50:08.067335 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:50:08.067339 | orchestrator | 2026-04-09 00:50:08.067343 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-09 00:50:08.067348 | orchestrator | Thursday 09 April 2026 00:46:03 +0000 (0:00:05.849) 0:00:18.468 ******** 2026-04-09 00:50:08.067352 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.067357 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.067362 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.067366 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.067371 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.067376 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.067380 | orchestrator | 2026-04-09 00:50:08.067385 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-09 00:50:08.067389 | orchestrator | Thursday 09 April 2026 00:46:04 +0000 (0:00:01.231) 0:00:19.700 ******** 2026-04-09 00:50:08.067394 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.067399 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.067403 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.067408 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.067474 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.067479 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.067487 | orchestrator | 2026-04-09 00:50:08.067492 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-09 00:50:08.067498 | orchestrator | Thursday 09 April 2026 00:46:07 +0000 (0:00:02.899) 0:00:22.600 ******** 2026-04-09 00:50:08.067503 | 
orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.067507 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.067512 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.067516 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.067521 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.067526 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.067530 | orchestrator | 2026-04-09 00:50:08.067535 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-09 00:50:08.067539 | orchestrator | Thursday 09 April 2026 00:46:09 +0000 (0:00:01.536) 0:00:24.136 ******** 2026-04-09 00:50:08.067545 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-09 00:50:08.067550 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-09 00:50:08.067554 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:50:08.067559 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-09 00:50:08.067563 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-09 00:50:08.067569 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:50:08.067573 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-09 00:50:08.067578 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-09 00:50:08.067582 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:50:08.067587 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-09 00:50:08.067592 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-09 00:50:08.067597 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.067601 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-09 00:50:08.067605 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-09 00:50:08.067610 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.067614 | orchestrator 
| skipping: [testbed-node-2] => (item=rancher)
2026-04-09 00:50:08.067619 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-09 00:50:08.067624 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.067628 | orchestrator |
2026-04-09 00:50:08.067633 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-09 00:50:08.067641 | orchestrator | Thursday 09 April 2026 00:46:09 +0000 (0:00:00.750) 0:00:24.886 ********
2026-04-09 00:50:08.067645 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.067650 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.067655 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.067659 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.067664 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.067668 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.067673 | orchestrator |
2026-04-09 00:50:08.067677 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-09 00:50:08.067684 | orchestrator | Thursday 09 April 2026 00:46:11 +0000 (0:00:01.247) 0:00:26.134 ********
2026-04-09 00:50:08.067689 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.067694 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.067698 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.067703 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.067707 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.067712 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.067716 | orchestrator |
2026-04-09 00:50:08.067721 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-09 00:50:08.067725 | orchestrator |
2026-04-09 00:50:08.067730 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-09 00:50:08.067735 | orchestrator | Thursday 09 April 2026 00:46:13 +0000 (0:00:01.799) 0:00:27.933 ********
2026-04-09 00:50:08.067743 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.067748 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.067752 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.067757 | orchestrator |
2026-04-09 00:50:08.067760 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-09 00:50:08.067764 | orchestrator | Thursday 09 April 2026 00:46:13 +0000 (0:00:00.909) 0:00:28.842 ********
2026-04-09 00:50:08.067768 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.067772 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.067776 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.067780 | orchestrator |
2026-04-09 00:50:08.067786 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-09 00:50:08.067792 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:01.102) 0:00:29.945 ********
2026-04-09 00:50:08.067798 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.067803 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.067809 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.067815 | orchestrator |
2026-04-09 00:50:08.067821 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-09 00:50:08.067827 | orchestrator | Thursday 09 April 2026 00:46:16 +0000 (0:00:01.374) 0:00:31.319 ********
2026-04-09 00:50:08.067833 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.067838 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.067844 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.067849 | orchestrator |
2026-04-09 00:50:08.067855 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-09 00:50:08.067861 | orchestrator | Thursday 09 April 2026 00:46:17 +0000 (0:00:01.535) 0:00:32.855 ********
2026-04-09 00:50:08.067866 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.067872 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.067877 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.067882 | orchestrator |
2026-04-09 00:50:08.067889 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-09 00:50:08.067895 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:00.288) 0:00:33.143 ********
2026-04-09 00:50:08.067901 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.067908 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.067915 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.067919 | orchestrator |
2026-04-09 00:50:08.067922 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-09 00:50:08.067926 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:00.703) 0:00:33.846 ********
2026-04-09 00:50:08.067930 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.067934 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.067938 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.067941 | orchestrator |
2026-04-09 00:50:08.067945 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-09 00:50:08.067949 | orchestrator | Thursday 09 April 2026 00:46:20 +0000 (0:00:01.510) 0:00:35.357 ********
2026-04-09 00:50:08.067953 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:08.067957 | orchestrator |
2026-04-09 00:50:08.067961 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-09 00:50:08.067965 | orchestrator | Thursday 09 April 2026 00:46:21 +0000 (0:00:00.721) 0:00:36.079 ********
2026-04-09 00:50:08.067969 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.067972 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.067976 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.067980 | orchestrator |
2026-04-09 00:50:08.067984 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-09 00:50:08.067987 | orchestrator | Thursday 09 April 2026 00:46:23 +0000 (0:00:01.831) 0:00:37.911 ********
2026-04-09 00:50:08.067991 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.067995 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068003 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.068007 | orchestrator |
2026-04-09 00:50:08.068011 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-09 00:50:08.068014 | orchestrator | Thursday 09 April 2026 00:46:23 +0000 (0:00:00.963) 0:00:38.874 ********
2026-04-09 00:50:08.068018 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.068022 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068026 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.068030 | orchestrator |
2026-04-09 00:50:08.068033 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-09 00:50:08.068037 | orchestrator | Thursday 09 April 2026 00:46:25 +0000 (0:00:01.151) 0:00:40.026 ********
2026-04-09 00:50:08.068041 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.068045 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.068049 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068053 | orchestrator |
2026-04-09 00:50:08.068056 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-09 00:50:08.068064 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:01.858) 0:00:41.884 ********
2026-04-09 00:50:08.068068 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.068071 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.068075 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.068079 | orchestrator |
2026-04-09 00:50:08.068083 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-09 00:50:08.068087 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:00.395) 0:00:42.280 ********
2026-04-09 00:50:08.068091 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.068095 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.068098 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.068102 | orchestrator |
2026-04-09 00:50:08.068106 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-09 00:50:08.068110 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:00.521) 0:00:42.801 ********
2026-04-09 00:50:08.068114 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068118 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.068122 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.068130 | orchestrator |
2026-04-09 00:50:08.068134 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-09 00:50:08.068137 | orchestrator | Thursday 09 April 2026 00:46:30 +0000 (0:00:02.693) 0:00:45.495 ********
2026-04-09 00:50:08.068141 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.068145 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.068149 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.068153 | orchestrator |
2026-04-09 00:50:08.068157 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-09 00:50:08.068160 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:02.268) 0:00:47.764 ********
2026-04-09 00:50:08.068164 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.068168 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.068172 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.068176 | orchestrator |
2026-04-09 00:50:08.068180 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-09 00:50:08.068187 | orchestrator | Thursday 09 April 2026 00:46:33 +0000 (0:00:00.554) 0:00:48.318 ********
2026-04-09 00:50:08.068191 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-09 00:50:08.068195 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-09 00:50:08.068200 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-09 00:50:08.068204 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-09 00:50:08.068601 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-09 00:50:08.068616 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-09 00:50:08.068620 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-09 00:50:08.068624 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-09 00:50:08.068629 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-09 00:50:08.068632 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-09 00:50:08.068637 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-09 00:50:08.068640 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-09 00:50:08.068644 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-09 00:50:08.068648 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-09 00:50:08.068652 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-09 00:50:08.068656 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.068660 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.068664 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.068668 | orchestrator |
2026-04-09 00:50:08.068672 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-09 00:50:08.068676 | orchestrator | Thursday 09 April 2026 00:47:27 +0000 (0:00:53.690) 0:01:42.008 ********
2026-04-09 00:50:08.068680 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.068684 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.068688 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.068692 | orchestrator |
2026-04-09 00:50:08.068696 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-09 00:50:08.068706 | orchestrator | Thursday 09 April 2026 00:47:27 +0000 (0:00:00.436) 0:01:42.445 ********
2026-04-09 00:50:08.068710 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.068713 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068717 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.068721 | orchestrator |
2026-04-09 00:50:08.068725 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-09 00:50:08.068729 | orchestrator | Thursday 09 April 2026 00:47:28 +0000 (0:00:01.022) 0:01:43.468 ********
2026-04-09 00:50:08.068733 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068737 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.068741 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.068744 | orchestrator |
2026-04-09 00:50:08.068749 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-09 00:50:08.068756 | orchestrator | Thursday 09 April 2026 00:47:29 +0000 (0:00:01.296) 0:01:44.765 ********
2026-04-09 00:50:08.068762 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.068767 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.068778 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068783 | orchestrator |
2026-04-09 00:50:08.068790 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-09 00:50:08.068796 | orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:24.394) 0:02:09.159 ********
2026-04-09 00:50:08.068810 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.068815 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.068821 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.068827 | orchestrator |
2026-04-09 00:50:08.068833 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-09 00:50:08.068839 | orchestrator | Thursday 09 April 2026 00:47:55 +0000 (0:00:00.751) 0:02:09.911 ********
2026-04-09 00:50:08.068845 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.068851 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.068857 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.068863 | orchestrator |
2026-04-09 00:50:08.068868 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-09 00:50:08.068874 | orchestrator | Thursday 09 April 2026 00:47:55 +0000 (0:00:00.958) 0:02:10.870 ********
2026-04-09 00:50:08.068880 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068886 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.068892 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.068898 | orchestrator |
2026-04-09 00:50:08.068905 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-09 00:50:08.068910 | orchestrator | Thursday 09 April 2026 00:47:56 +0000 (0:00:00.681) 0:02:11.551 ********
2026-04-09 00:50:08.068913 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.068917 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.068921 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.068925 | orchestrator |
2026-04-09 00:50:08.068929 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-09 00:50:08.068937 | orchestrator | Thursday 09 April 2026 00:47:57 +0000 (0:00:00.604) 0:02:12.156 ********
2026-04-09 00:50:08.068941 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.068945 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.068949 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.068952 | orchestrator |
2026-04-09 00:50:08.068956 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-09 00:50:08.068960 | orchestrator | Thursday 09 April 2026 00:47:57 +0000 (0:00:00.284) 0:02:12.440 ********
2026-04-09 00:50:08.068964 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068968 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.068972 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.068976 | orchestrator |
2026-04-09 00:50:08.068980 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-09 00:50:08.068984 | orchestrator | Thursday 09 April 2026 00:47:58 +0000 (0:00:00.745) 0:02:13.186 ********
2026-04-09 00:50:08.068988 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.068991 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.068995 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.068999 | orchestrator |
2026-04-09 00:50:08.069003 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-09 00:50:08.069007 | orchestrator | Thursday 09 April 2026 00:47:58 +0000 (0:00:00.583) 0:02:13.770 ********
2026-04-09 00:50:08.069010 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.069014 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.069018 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.069022 | orchestrator |
2026-04-09 00:50:08.069026 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-09 00:50:08.069030 | orchestrator | Thursday 09 April 2026 00:47:59 +0000 (0:00:00.910) 0:02:14.680 ********
2026-04-09 00:50:08.069033 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.069037 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.069041 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.069045 | orchestrator |
2026-04-09 00:50:08.069048 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-09 00:50:08.069052 | orchestrator | Thursday 09 April 2026 00:48:00 +0000 (0:00:00.853) 0:02:15.534 ********
2026-04-09 00:50:08.069056 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.069064 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.069068 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.069072 | orchestrator |
2026-04-09 00:50:08.069076 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-09 00:50:08.069080 | orchestrator | Thursday 09 April 2026 00:48:01 +0000 (0:00:00.519) 0:02:16.053 ********
2026-04-09 00:50:08.069083 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.069087 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.069091 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.069095 | orchestrator |
2026-04-09 00:50:08.069099 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-09 00:50:08.069102 | orchestrator | Thursday 09 April 2026 00:48:01 +0000 (0:00:00.314) 0:02:16.367 ********
2026-04-09 00:50:08.069106 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.069110 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.069114 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.069118 | orchestrator |
2026-04-09 00:50:08.069122 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-09 00:50:08.069125 | orchestrator | Thursday 09 April 2026 00:48:02 +0000 (0:00:00.663) 0:02:17.031 ********
2026-04-09 00:50:08.069129 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.069138 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.069142 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.069145 | orchestrator |
2026-04-09 00:50:08.069150 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-09 00:50:08.069154 | orchestrator | Thursday 09 April 2026 00:48:02 +0000 (0:00:00.713) 0:02:17.745 ********
2026-04-09 00:50:08.069157 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 00:50:08.069162 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 00:50:08.069166 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 00:50:08.069170 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 00:50:08.069174 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 00:50:08.069177 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 00:50:08.069181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 00:50:08.069185 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 00:50:08.069189 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 00:50:08.069193 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-09 00:50:08.069197 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 00:50:08.069201 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 00:50:08.069205 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-09 00:50:08.069209 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 00:50:08.069215 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 00:50:08.069221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 00:50:08.069230 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 00:50:08.069237 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 00:50:08.069248 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 00:50:08.069254 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 00:50:08.069260 | orchestrator |
2026-04-09 00:50:08.069265 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-09 00:50:08.069272 | orchestrator |
2026-04-09 00:50:08.069278 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-09 00:50:08.069284 | orchestrator | Thursday 09 April 2026 00:48:05 +0000 (0:00:03.000) 0:02:20.745 ********
2026-04-09 00:50:08.069291 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:50:08.069297 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:50:08.069303 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:50:08.069310 | orchestrator |
2026-04-09 00:50:08.069315 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-09 00:50:08.069322 | orchestrator | Thursday 09 April 2026 00:48:06 +0000 (0:00:00.278) 0:02:21.024 ********
2026-04-09 00:50:08.069327 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:50:08.069331 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:50:08.069335 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:50:08.069339 | orchestrator |
2026-04-09 00:50:08.069343 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-09 00:50:08.069347 | orchestrator | Thursday 09 April 2026 00:48:06 +0000 (0:00:00.578) 0:02:21.603 ********
2026-04-09 00:50:08.069350 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:50:08.069354 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:50:08.069358 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:50:08.069362 | orchestrator |
2026-04-09 00:50:08.069366 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-09 00:50:08.069370 | orchestrator | Thursday 09 April 2026 00:48:07 +0000 (0:00:00.400) 0:02:22.003 ********
2026-04-09 00:50:08.069374 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:50:08.069378 | orchestrator |
2026-04-09 00:50:08.069381 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-09 00:50:08.069385 | orchestrator | Thursday 09 April 2026 00:48:07 +0000 (0:00:00.440) 0:02:22.444 ********
2026-04-09 00:50:08.069389 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.069393 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.069397 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.069401 | orchestrator |
2026-04-09 00:50:08.069405 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-09 00:50:08.069408 | orchestrator | Thursday 09 April 2026 00:48:07 +0000 (0:00:00.273) 0:02:22.717 ********
2026-04-09 00:50:08.069451 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.069455 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.069459 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.069463 | orchestrator |
2026-04-09 00:50:08.069467 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-09 00:50:08.069475 | orchestrator | Thursday 09 April 2026 00:48:08 +0000 (0:00:00.580) 0:02:23.298 ********
2026-04-09 00:50:08.069479 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.069483 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.069487 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.069491 | orchestrator |
2026-04-09 00:50:08.069495 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-09 00:50:08.069499 | orchestrator | Thursday 09 April 2026 00:48:08 +0000 (0:00:00.390) 0:02:23.689 ********
2026-04-09 00:50:08.069502 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:50:08.069506 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:50:08.069510 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:50:08.069514 | orchestrator |
2026-04-09 00:50:08.069518 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-09 00:50:08.069522 | orchestrator | Thursday 09 April 2026 00:48:09 +0000 (0:00:00.640) 0:02:24.330 ********
2026-04-09 00:50:08.069531 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:50:08.069535 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:50:08.069539 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:50:08.069542 | orchestrator |
2026-04-09 00:50:08.069546 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-09 00:50:08.069550 | orchestrator | Thursday 09 April 2026 00:48:10 +0000 (0:00:01.082) 0:02:25.412 ********
2026-04-09 00:50:08.069554 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:50:08.069558 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:50:08.069562 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:50:08.069566 | orchestrator |
2026-04-09 00:50:08.069570 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-09 00:50:08.069573 | orchestrator | Thursday 09 April 2026 00:48:12 +0000 (0:00:01.815) 0:02:27.228 ********
2026-04-09 00:50:08.069577 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:50:08.069581 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:50:08.069585 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:50:08.069589 | orchestrator |
2026-04-09 00:50:08.069593 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-09 00:50:08.069596 | orchestrator |
2026-04-09 00:50:08.069600 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-09 00:50:08.069604 | orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:10.841) 0:02:38.070 ********
2026-04-09 00:50:08.069608 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:08.069612 | orchestrator |
2026-04-09 00:50:08.069616 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-09 00:50:08.069619 | orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:00.777) 0:02:38.847 ********
2026-04-09 00:50:08.069623 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.069627 | orchestrator |
2026-04-09 00:50:08.069631 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-09 00:50:08.069638 | orchestrator | Thursday 09 April 2026 00:48:24 +0000 (0:00:00.419) 0:02:39.267 ********
2026-04-09 00:50:08.069643 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-09 00:50:08.069647 | orchestrator |
2026-04-09 00:50:08.069651 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-09 00:50:08.069654 | orchestrator | Thursday 09 April 2026 00:48:24 +0000 (0:00:00.473) 0:02:39.740 ********
2026-04-09 00:50:08.069658 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.069662 | orchestrator |
2026-04-09 00:50:08.069666 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-09 00:50:08.069670 | orchestrator | Thursday 09 April 2026 00:48:25 +0000 (0:00:01.139) 0:02:40.880 ********
2026-04-09 00:50:08.069673 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.069677 | orchestrator |
2026-04-09 00:50:08.069681 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-09 00:50:08.069685 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.491) 0:02:41.371 ********
2026-04-09 00:50:08.069689 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:50:08.069693 | orchestrator |
2026-04-09 00:50:08.069697 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-09 00:50:08.069701 | orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:01.379) 0:02:42.750 ********
2026-04-09 00:50:08.069705 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:50:08.069709 | orchestrator |
2026-04-09 00:50:08.069712 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-09 00:50:08.069716 | orchestrator | Thursday 09 April 2026 00:48:28 +0000 (0:00:00.788) 0:02:43.538 ********
2026-04-09 00:50:08.069720 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.069724 | orchestrator |
2026-04-09 00:50:08.069728 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-09 00:50:08.069732 | orchestrator | Thursday 09 April 2026 00:48:28 +0000 (0:00:00.339) 0:02:43.878 ********
2026-04-09 00:50:08.069740 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.069744 | orchestrator |
2026-04-09 00:50:08.069748 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-09 00:50:08.069751 | orchestrator |
2026-04-09 00:50:08.069755 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-09 00:50:08.069759 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:00.361) 0:02:44.239 ********
2026-04-09 00:50:08.069763 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:08.069767 | orchestrator |
2026-04-09 00:50:08.069771 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-09 00:50:08.069774 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:00.126) 0:02:44.366 ********
2026-04-09 00:50:08.069778 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:50:08.069782 | orchestrator |
2026-04-09 00:50:08.069786 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-09 00:50:08.069790 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:00.215) 0:02:44.581 ********
2026-04-09 00:50:08.069793 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:08.069797 | orchestrator |
2026-04-09 00:50:08.069801 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-09 00:50:08.069805 | orchestrator | Thursday 09 April 2026 00:48:30 +0000 (0:00:00.939) 0:02:45.521 ********
2026-04-09 00:50:08.069812 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:08.069816 | orchestrator |
2026-04-09 00:50:08.069820 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-09 00:50:08.069824 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:01.281) 0:02:46.803 ********
2026-04-09 00:50:08.069828 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.069832 | orchestrator |
2026-04-09 00:50:08.069835 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-09 00:50:08.069839 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:00.753) 0:02:47.556 ********
2026-04-09 00:50:08.069844 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:08.069847 | orchestrator |
2026-04-09 00:50:08.069851 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-09 00:50:08.069855 | orchestrator | Thursday 09 April 2026 00:48:33 +0000 (0:00:00.389) 0:02:47.945 ********
2026-04-09 00:50:08.069859 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.069863 | orchestrator |
2026-04-09 00:50:08.069867 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-09 00:50:08.069871 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:06.406) 0:02:54.351 ********
2026-04-09 00:50:08.069874 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.069878 | orchestrator |
2026-04-09 00:50:08.069882 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-09 00:50:08.069886 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:10.218) 0:03:04.570 ********
2026-04-09 00:50:08.069890 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:08.069894 | orchestrator |
2026-04-09 00:50:08.069897 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-09 00:50:08.069901 | orchestrator |
2026-04-09 00:50:08.069905 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-09 00:50:08.069909 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.483) 0:03:05.053 ********
2026-04-09 00:50:08.069913 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.069917 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.069921 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.069925 | orchestrator |
2026-04-09 00:50:08.069928 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-09 00:50:08.069933 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.390) 0:03:05.444 ********
2026-04-09 00:50:08.069939 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.069946 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.069952 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.069965 | orchestrator |
2026-04-09 00:50:08.069971 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-09 00:50:08.069978 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.282) 0:03:05.727 ********
2026-04-09 00:50:08.069988 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:08.069994 | orchestrator |
2026-04-09 00:50:08.070001 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-09 00:50:08.070007 | orchestrator | Thursday 09 April 2026 00:48:51 +0000 (0:00:00.441) 0:03:06.168 ********
2026-04-09 00:50:08.070059 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.070065 |
orchestrator | 2026-04-09 00:50:08.070069 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-09 00:50:08.070073 | orchestrator | Thursday 09 April 2026 00:48:51 +0000 (0:00:00.687) 0:03:06.856 ******** 2026-04-09 00:50:08.070077 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:50:08.070081 | orchestrator | 2026-04-09 00:50:08.070085 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-09 00:50:08.070089 | orchestrator | Thursday 09 April 2026 00:48:52 +0000 (0:00:00.798) 0:03:07.654 ******** 2026-04-09 00:50:08.070092 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.070096 | orchestrator | 2026-04-09 00:50:08.070100 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-09 00:50:08.070104 | orchestrator | Thursday 09 April 2026 00:48:52 +0000 (0:00:00.214) 0:03:07.869 ******** 2026-04-09 00:50:08.070108 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:50:08.070112 | orchestrator | 2026-04-09 00:50:08.070116 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-09 00:50:08.070120 | orchestrator | Thursday 09 April 2026 00:48:53 +0000 (0:00:00.955) 0:03:08.825 ******** 2026-04-09 00:50:08.070123 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.070127 | orchestrator | 2026-04-09 00:50:08.070132 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-09 00:50:08.070135 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:00.119) 0:03:08.944 ******** 2026-04-09 00:50:08.070139 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.070143 | orchestrator | 2026-04-09 00:50:08.070147 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-09 00:50:08.070150 | orchestrator | Thursday 09 
April 2026 00:48:54 +0000 (0:00:00.143) 0:03:09.088 ******** 2026-04-09 00:50:08.070154 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.070158 | orchestrator | 2026-04-09 00:50:08.070162 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-09 00:50:08.070166 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:00.162) 0:03:09.250 ******** 2026-04-09 00:50:08.070170 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.070173 | orchestrator | 2026-04-09 00:50:08.070177 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-09 00:50:08.070181 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:00.103) 0:03:09.354 ******** 2026-04-09 00:50:08.070185 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-09 00:50:08.070189 | orchestrator | 2026-04-09 00:50:08.070193 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-09 00:50:08.070197 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:04.656) 0:03:14.010 ******** 2026-04-09 00:50:08.070201 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-09 00:50:08.070209 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-04-09 00:50:08.070214 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-09 00:50:08.070217 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-09 00:50:08.070222 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-09 00:50:08.070230 | orchestrator |
2026-04-09 00:50:08.070234 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-09 00:50:08.070238 | orchestrator | Thursday 09 April 2026 00:49:41 +0000 (0:00:42.391) 0:03:56.401 ********
2026-04-09 00:50:08.070242 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.070246 | orchestrator |
2026-04-09 00:50:08.070250 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-09 00:50:08.070254 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:01.166) 0:03:57.567 ********
2026-04-09 00:50:08.070258 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.070261 | orchestrator |
2026-04-09 00:50:08.070265 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-09 00:50:08.070269 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:01.407) 0:03:58.975 ********
2026-04-09 00:50:08.070274 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.070280 | orchestrator |
2026-04-09 00:50:08.070287 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-09 00:50:08.070294 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:01.006) 0:03:59.982 ********
2026-04-09 00:50:08.070301 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.070307 | orchestrator |
2026-04-09 00:50:08.070313 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-09 00:50:08.070320 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:00.112) 0:04:00.094 ********
2026-04-09 00:50:08.070327 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-09 00:50:08.070334 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-09 00:50:08.070341 | orchestrator |
2026-04-09 00:50:08.070347 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-09 00:50:08.070354 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:01.777) 0:04:01.872 ********
2026-04-09 00:50:08.070358 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.070362 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.070366 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.070370 | orchestrator |
2026-04-09 00:50:08.070374 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-09 00:50:08.070381 | orchestrator | Thursday 09 April 2026 00:49:47 +0000 (0:00:00.274) 0:04:02.146 ********
2026-04-09 00:50:08.070385 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.070389 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.070393 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.070397 | orchestrator |
2026-04-09 00:50:08.070401 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-09 00:50:08.070404 | orchestrator |
2026-04-09 00:50:08.070408 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-09 00:50:08.070443 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.865) 0:04:03.011 ********
2026-04-09 00:50:08.070447 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:08.070451 | orchestrator |
2026-04-09 00:50:08.070454 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-09 00:50:08.070458 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.110) 0:04:03.122 ********
2026-04-09 00:50:08.070462 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:50:08.070466 | orchestrator |
2026-04-09 00:50:08.070470 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-09 00:50:08.070474 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.334) 0:04:03.456 ********
2026-04-09 00:50:08.070478 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.070481 | orchestrator |
2026-04-09 00:50:08.070485 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-09 00:50:08.070489 | orchestrator |
2026-04-09 00:50:08.070493 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-09 00:50:08.070502 | orchestrator | Thursday 09 April 2026 00:49:53 +0000 (0:00:05.011) 0:04:08.467 ********
2026-04-09 00:50:08.070505 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:50:08.070509 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:50:08.070513 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:50:08.070517 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.070521 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.070525 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.070529 | orchestrator |
2026-04-09 00:50:08.070532 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-09 00:50:08.070536 | orchestrator | Thursday 09 April 2026 00:49:54 +0000 (0:00:00.439) 0:04:08.907 ********
2026-04-09 00:50:08.070540 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 00:50:08.070544 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 00:50:08.070548 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 00:50:08.070552 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 00:50:08.070556 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 00:50:08.070560 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 00:50:08.070564 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 00:50:08.070568 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 00:50:08.070576 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 00:50:08.070581 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 00:50:08.070584 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 00:50:08.070588 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 00:50:08.070592 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 00:50:08.070596 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 00:50:08.070600 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 00:50:08.070604 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 00:50:08.070607 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 00:50:08.070611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 00:50:08.070615 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 00:50:08.070619 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 00:50:08.070623 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 00:50:08.070627 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 00:50:08.070631 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 00:50:08.070634 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 00:50:08.070638 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 00:50:08.070642 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 00:50:08.070646 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 00:50:08.070650 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 00:50:08.070662 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 00:50:08.070666 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 00:50:08.070669 | orchestrator |
2026-04-09 00:50:08.070673 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-09 00:50:08.070677 | orchestrator | Thursday 09 April 2026 00:50:06 +0000 (0:00:12.156) 0:04:21.063 ********
2026-04-09 00:50:08.070681 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.070685 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.070689 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.070693 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.070696 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.070700 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.070704 | orchestrator |
2026-04-09 00:50:08.070708 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-09 00:50:08.070712 | orchestrator | Thursday 09 April 2026 00:50:06 +0000 (0:00:00.451) 0:04:21.515 ********
2026-04-09 00:50:08.070716 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.070720 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.070724 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.070728 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.070732 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.070738 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.070744 | orchestrator |
2026-04-09 00:50:08.070754 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:50:08.070763 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:50:08.070771 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-09 00:50:08.070778 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-09 00:50:08.070784 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-09 00:50:08.070790 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 00:50:08.070796 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 00:50:08.070801 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 00:50:08.070807 | orchestrator |
2026-04-09 00:50:08.070813 | orchestrator |
2026-04-09 00:50:08.070819 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:50:08.070830 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:00.444) 0:04:21.959 ********
2026-04-09 00:50:08.070837 | orchestrator | ===============================================================================
2026-04-09 00:50:08.070844 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.69s
2026-04-09 00:50:08.070850 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.39s
2026-04-09 00:50:08.070856 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.39s
2026-04-09 00:50:08.070862 | orchestrator | Manage labels ---------------------------------------------------------- 12.16s
2026-04-09 00:50:08.070866 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.84s
2026-04-09 00:50:08.070870 | orchestrator | kubectl : Install required packages ------------------------------------ 10.22s
2026-04-09 00:50:08.070879 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.41s
2026-04-09 00:50:08.070882 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.85s
2026-04-09 00:50:08.070886 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.01s
2026-04-09 00:50:08.070890 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.66s
2026-04-09 00:50:08.070894 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.00s
2026-04-09 00:50:08.070898 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.90s
2026-04-09 00:50:08.070902 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.69s
2026-04-09 00:50:08.070906 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.28s
2026-04-09 00:50:08.070909 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.27s
2026-04-09 00:50:08.070913 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.18s
2026-04-09 00:50:08.070917 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.86s
2026-04-09 00:50:08.070921 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.83s
2026-04-09 00:50:08.070925 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.82s
2026-04-09 00:50:08.070929 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.80s
2026-04-09 00:50:08.070936 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:08.070941 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:08.070945 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:08.070948 | orchestrator | 2026-04-09 00:50:08 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:11.114043 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:11.114707 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:11.115983 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task bd42de20-1196-4ff5-8ba8-0f658dfc890a is in state STARTED
2026-04-09 00:50:11.116617 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:11.119049 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task 5d64ecf2-0c7d-4227-bc5e-fe78e6a83d30 is in state STARTED
2026-04-09 00:50:11.120084 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:11.120708 | orchestrator | 2026-04-09 00:50:11 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:14.228247 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:14.228338 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:14.228354 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task bd42de20-1196-4ff5-8ba8-0f658dfc890a is in state STARTED
2026-04-09 00:50:14.228365 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:14.228376 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task 5d64ecf2-0c7d-4227-bc5e-fe78e6a83d30 is in state STARTED
2026-04-09 00:50:14.228386 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:14.228483 | orchestrator | 2026-04-09 00:50:14 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:17.284160 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:17.284336 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:17.285310 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task bd42de20-1196-4ff5-8ba8-0f658dfc890a is in state STARTED
2026-04-09 00:50:17.286141 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:17.286573 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task 5d64ecf2-0c7d-4227-bc5e-fe78e6a83d30 is in state SUCCESS
2026-04-09 00:50:17.287322 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:17.287362 | orchestrator | 2026-04-09 00:50:17 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:20.337533 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:20.339219 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:20.340531 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task bd42de20-1196-4ff5-8ba8-0f658dfc890a is in state SUCCESS
2026-04-09 00:50:20.342098 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:20.343286 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:20.343373 | orchestrator | 2026-04-09 00:50:20 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:23.381660 | orchestrator | 2026-04-09 00:50:23 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:23.382208 | orchestrator | 2026-04-09 00:50:23 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:23.383145 | orchestrator | 2026-04-09 00:50:23 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:23.385350 | orchestrator | 2026-04-09 00:50:23 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:23.385693 | orchestrator | 2026-04-09 00:50:23 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:26.413699 | orchestrator | 2026-04-09 00:50:26 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:26.414644 | orchestrator | 2026-04-09 00:50:26 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:26.417214 | orchestrator | 2026-04-09 00:50:26 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:26.418108 | orchestrator | 2026-04-09 00:50:26 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:26.418138 | orchestrator | 2026-04-09 00:50:26 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:29.444124 | orchestrator | 2026-04-09 00:50:29 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:29.444212 | orchestrator | 2026-04-09 00:50:29 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:29.445003 | orchestrator | 2026-04-09 00:50:29 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:29.446151 | orchestrator | 2026-04-09 00:50:29 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:29.446220 | orchestrator | 2026-04-09 00:50:29 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:32.475132 | orchestrator | 2026-04-09 00:50:32 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:32.475260 | orchestrator | 2026-04-09 00:50:32 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:32.475949 | orchestrator | 2026-04-09 00:50:32 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:32.476622 | orchestrator | 2026-04-09 00:50:32 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:32.476665 | orchestrator | 2026-04-09 00:50:32 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:35.503281 | orchestrator | 2026-04-09 00:50:35 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:35.503841 | orchestrator | 2026-04-09 00:50:35 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:35.505094 | orchestrator | 2026-04-09 00:50:35 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:35.505227 | orchestrator | 2026-04-09 00:50:35 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:35.506184 | orchestrator | 2026-04-09 00:50:35 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:38.533243 | orchestrator | 2026-04-09 00:50:38 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:38.534791 | orchestrator | 2026-04-09 00:50:38 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:38.535261 | orchestrator | 2026-04-09 00:50:38 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:38.536126 | orchestrator | 2026-04-09 00:50:38 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:38.536260 | orchestrator | 2026-04-09 00:50:38 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:41.562256 | orchestrator | 2026-04-09 00:50:41 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:41.562441 | orchestrator | 2026-04-09 00:50:41 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:41.562981 | orchestrator | 2026-04-09 00:50:41 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:41.563601 | orchestrator | 2026-04-09 00:50:41 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:41.563631 | orchestrator | 2026-04-09 00:50:41 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:44.597286 | orchestrator | 2026-04-09 00:50:44 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:44.599761 | orchestrator | 2026-04-09 00:50:44 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:44.600369 | orchestrator | 2026-04-09 00:50:44 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:44.602837 | orchestrator | 2026-04-09 00:50:44 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:44.602899 | orchestrator | 2026-04-09 00:50:44 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:47.631504 | orchestrator | 2026-04-09 00:50:47 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:47.634255 | orchestrator | 2026-04-09 00:50:47 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:47.634359 | orchestrator | 2026-04-09 00:50:47 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:47.634400 | orchestrator | 2026-04-09 00:50:47 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:47.634411 | orchestrator | 2026-04-09 00:50:47 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:50.728117 | orchestrator | 2026-04-09 00:50:50 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:50.728316 | orchestrator | 2026-04-09 00:50:50 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:50.729158 | orchestrator | 2026-04-09 00:50:50 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:50.729924 | orchestrator | 2026-04-09 00:50:50 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:50.729960 | orchestrator | 2026-04-09 00:50:50 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:53.758611 | orchestrator | 2026-04-09 00:50:53 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:53.758938 | orchestrator | 2026-04-09 00:50:53 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:53.760454 | orchestrator | 2026-04-09 00:50:53 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:53.761001 | orchestrator | 2026-04-09 00:50:53 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:53.761055 | orchestrator | 2026-04-09 00:50:53 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:56.909129 | orchestrator | 2026-04-09 00:50:56 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:56.911752 | orchestrator | 2026-04-09 00:50:56 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:56.911812 | orchestrator | 2026-04-09 00:50:56 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:56.913949 | orchestrator | 2026-04-09 00:50:56 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:56.914003 | orchestrator | 2026-04-09 00:50:56 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:59.938483 | orchestrator | 2026-04-09 00:50:59 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:50:59.938630 | orchestrator | 2026-04-09 00:50:59 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:50:59.939205 | orchestrator | 2026-04-09 00:50:59 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:50:59.939824 | orchestrator | 2026-04-09 00:50:59 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:50:59.939896 | orchestrator | 2026-04-09 00:50:59 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:02.961596 | orchestrator | 2026-04-09 00:51:02 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:51:02.961800 | orchestrator | 2026-04-09 00:51:02 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:51:02.962317 | orchestrator | 2026-04-09 00:51:02 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:51:02.963885 | orchestrator | 2026-04-09 00:51:02 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:51:02.963909 | orchestrator | 2026-04-09 00:51:02 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:06.023395 | orchestrator | 2026-04-09 00:51:05 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:51:06.023542 | orchestrator | 2026-04-09 00:51:05 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:51:06.023562 | orchestrator | 2026-04-09 00:51:05 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:51:06.023578 | orchestrator | 2026-04-09 00:51:05 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:51:06.023593 | orchestrator | 2026-04-09 00:51:05 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:09.022002 | orchestrator | 2026-04-09 00:51:09 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:51:09.022129 | orchestrator | 2026-04-09 00:51:09 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED
2026-04-09 00:51:09.022395 | orchestrator | 2026-04-09 00:51:09 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:51:09.022945 | orchestrator | 2026-04-09 00:51:09 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:51:09.022981 | orchestrator | 2026-04-09 00:51:09 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:12.046182 | orchestrator | 2026-04-09 00:51:12 | INFO  | Task
fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:12.046253 | orchestrator | 2026-04-09 00:51:12 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:12.046259 | orchestrator | 2026-04-09 00:51:12 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:12.046305 | orchestrator | 2026-04-09 00:51:12 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:12.046311 | orchestrator | 2026-04-09 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:15.070714 | orchestrator | 2026-04-09 00:51:15 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:15.073518 | orchestrator | 2026-04-09 00:51:15 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:15.075614 | orchestrator | 2026-04-09 00:51:15 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:15.077737 | orchestrator | 2026-04-09 00:51:15 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:15.077836 | orchestrator | 2026-04-09 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:18.122814 | orchestrator | 2026-04-09 00:51:18 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:18.122905 | orchestrator | 2026-04-09 00:51:18 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:18.122915 | orchestrator | 2026-04-09 00:51:18 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:18.122923 | orchestrator | 2026-04-09 00:51:18 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:18.122928 | orchestrator | 2026-04-09 00:51:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:21.141135 | orchestrator | 2026-04-09 00:51:21 | INFO  | Task 
fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:21.141402 | orchestrator | 2026-04-09 00:51:21 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:21.142162 | orchestrator | 2026-04-09 00:51:21 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:21.142835 | orchestrator | 2026-04-09 00:51:21 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:21.142860 | orchestrator | 2026-04-09 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:24.167639 | orchestrator | 2026-04-09 00:51:24 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:24.170900 | orchestrator | 2026-04-09 00:51:24 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:24.174130 | orchestrator | 2026-04-09 00:51:24 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:24.175834 | orchestrator | 2026-04-09 00:51:24 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:24.176123 | orchestrator | 2026-04-09 00:51:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:27.214947 | orchestrator | 2026-04-09 00:51:27 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:27.216188 | orchestrator | 2026-04-09 00:51:27 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:27.217656 | orchestrator | 2026-04-09 00:51:27 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:27.218863 | orchestrator | 2026-04-09 00:51:27 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:27.218967 | orchestrator | 2026-04-09 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:30.259665 | orchestrator | 2026-04-09 00:51:30 | INFO  | Task 
fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:30.261626 | orchestrator | 2026-04-09 00:51:30 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:30.263555 | orchestrator | 2026-04-09 00:51:30 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:30.265488 | orchestrator | 2026-04-09 00:51:30 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:30.265531 | orchestrator | 2026-04-09 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:33.306534 | orchestrator | 2026-04-09 00:51:33 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:33.315802 | orchestrator | 2026-04-09 00:51:33 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:33.316666 | orchestrator | 2026-04-09 00:51:33 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:33.317342 | orchestrator | 2026-04-09 00:51:33 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:33.317481 | orchestrator | 2026-04-09 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:36.347964 | orchestrator | 2026-04-09 00:51:36 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:36.348579 | orchestrator | 2026-04-09 00:51:36 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:36.349666 | orchestrator | 2026-04-09 00:51:36 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:36.351782 | orchestrator | 2026-04-09 00:51:36 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:36.351830 | orchestrator | 2026-04-09 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:39.372117 | orchestrator | 2026-04-09 00:51:39 | INFO  | Task 
fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:39.372602 | orchestrator | 2026-04-09 00:51:39 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:39.373266 | orchestrator | 2026-04-09 00:51:39 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:39.373882 | orchestrator | 2026-04-09 00:51:39 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:39.374098 | orchestrator | 2026-04-09 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:42.412150 | orchestrator | 2026-04-09 00:51:42 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:42.412785 | orchestrator | 2026-04-09 00:51:42 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:42.413747 | orchestrator | 2026-04-09 00:51:42 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:42.414895 | orchestrator | 2026-04-09 00:51:42 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:42.414938 | orchestrator | 2026-04-09 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:45.452863 | orchestrator | 2026-04-09 00:51:45 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:45.457430 | orchestrator | 2026-04-09 00:51:45 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:45.457984 | orchestrator | 2026-04-09 00:51:45 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:45.460275 | orchestrator | 2026-04-09 00:51:45 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:45.460539 | orchestrator | 2026-04-09 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:48.492194 | orchestrator | 2026-04-09 00:51:48 | INFO  | Task 
fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:48.492329 | orchestrator | 2026-04-09 00:51:48 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state STARTED 2026-04-09 00:51:48.492344 | orchestrator | 2026-04-09 00:51:48 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED 2026-04-09 00:51:48.492356 | orchestrator | 2026-04-09 00:51:48 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:51:48.492367 | orchestrator | 2026-04-09 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:51.518577 | orchestrator | 2026-04-09 00:51:51 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:51:51.521384 | orchestrator | 2026-04-09 00:51:51.521450 | orchestrator | 2026-04-09 00:51:51.521456 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-09 00:51:51.521461 | orchestrator | 2026-04-09 00:51:51.521478 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-09 00:51:51.521485 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.250) 0:00:00.250 ******** 2026-04-09 00:51:51.521492 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-09 00:51:51.521499 | orchestrator | 2026-04-09 00:51:51.521505 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-09 00:51:51.521511 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:01.135) 0:00:01.386 ******** 2026-04-09 00:51:51.521517 | orchestrator | changed: [testbed-manager] 2026-04-09 00:51:51.521524 | orchestrator | 2026-04-09 00:51:51.521531 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-09 00:51:51.521537 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:01.862) 0:00:03.249 ******** 2026-04-09 00:51:51.521543 | orchestrator | 
changed: [testbed-manager] 2026-04-09 00:51:51.521549 | orchestrator | 2026-04-09 00:51:51.521554 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:51:51.521560 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:51:51.521590 | orchestrator | 2026-04-09 00:51:51.521597 | orchestrator | 2026-04-09 00:51:51.521606 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:51:51.521614 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:00.626) 0:00:03.875 ******** 2026-04-09 00:51:51.521621 | orchestrator | =============================================================================== 2026-04-09 00:51:51.521627 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.86s 2026-04-09 00:51:51.521632 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.14s 2026-04-09 00:51:51.521637 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.63s 2026-04-09 00:51:51.521643 | orchestrator | 2026-04-09 00:51:51.521648 | orchestrator | 2026-04-09 00:51:51.521655 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-09 00:51:51.521661 | orchestrator | 2026-04-09 00:51:51.521667 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-09 00:51:51.521672 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.355) 0:00:00.355 ******** 2026-04-09 00:51:51.521678 | orchestrator | ok: [testbed-manager] 2026-04-09 00:51:51.521685 | orchestrator | 2026-04-09 00:51:51.521690 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-09 00:51:51.521697 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:00.815) 0:00:01.170 
******** 2026-04-09 00:51:51.521703 | orchestrator | ok: [testbed-manager] 2026-04-09 00:51:51.521708 | orchestrator | 2026-04-09 00:51:51.521714 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-09 00:51:51.521720 | orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:00.796) 0:00:01.967 ******** 2026-04-09 00:51:51.521725 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-09 00:51:51.521731 | orchestrator | 2026-04-09 00:51:51.521737 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-09 00:51:51.521743 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:01.381) 0:00:03.348 ******** 2026-04-09 00:51:51.521749 | orchestrator | changed: [testbed-manager] 2026-04-09 00:51:51.521756 | orchestrator | 2026-04-09 00:51:51.521762 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-09 00:51:51.521861 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:01.130) 0:00:04.479 ******** 2026-04-09 00:51:51.521868 | orchestrator | changed: [testbed-manager] 2026-04-09 00:51:51.521872 | orchestrator | 2026-04-09 00:51:51.521876 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-09 00:51:51.521879 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:00.493) 0:00:04.972 ******** 2026-04-09 00:51:51.521884 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-09 00:51:51.521887 | orchestrator | 2026-04-09 00:51:51.521891 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-09 00:51:51.521895 | orchestrator | Thursday 09 April 2026 00:50:16 +0000 (0:00:01.335) 0:00:06.308 ******** 2026-04-09 00:51:51.521899 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-09 00:51:51.521903 | orchestrator | 2026-04-09 00:51:51.521907 | orchestrator 
| TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-09 00:51:51.521910 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.664) 0:00:06.972 ******** 2026-04-09 00:51:51.521914 | orchestrator | ok: [testbed-manager] 2026-04-09 00:51:51.521918 | orchestrator | 2026-04-09 00:51:51.521922 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-09 00:51:51.521926 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.310) 0:00:07.283 ******** 2026-04-09 00:51:51.521929 | orchestrator | ok: [testbed-manager] 2026-04-09 00:51:51.521933 | orchestrator | 2026-04-09 00:51:51.521937 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:51:51.521941 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:51:51.521951 | orchestrator | 2026-04-09 00:51:51.521955 | orchestrator | 2026-04-09 00:51:51.521959 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:51:51.521963 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.256) 0:00:07.539 ******** 2026-04-09 00:51:51.521967 | orchestrator | =============================================================================== 2026-04-09 00:51:51.521970 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.38s 2026-04-09 00:51:51.521974 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.34s 2026-04-09 00:51:51.521978 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.13s 2026-04-09 00:51:51.522047 | orchestrator | Get home directory of operator user ------------------------------------- 0.82s 2026-04-09 00:51:51.522054 | orchestrator | Create .kube directory -------------------------------------------------- 0.80s 
2026-04-09 00:51:51.522058 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.66s 2026-04-09 00:51:51.522062 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.49s 2026-04-09 00:51:51.522066 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.31s 2026-04-09 00:51:51.522071 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2026-04-09 00:51:51.522075 | orchestrator | 2026-04-09 00:51:51.522080 | orchestrator | 2026-04-09 00:51:51 | INFO  | Task c992dcba-0650-4d3e-8ac6-3d9d0dd640aa is in state SUCCESS 2026-04-09 00:51:51.523342 | orchestrator | 2026-04-09 00:51:51.523403 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-09 00:51:51.523412 | orchestrator | 2026-04-09 00:51:51.523421 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-09 00:51:51.523429 | orchestrator | Thursday 09 April 2026 00:48:43 +0000 (0:00:00.226) 0:00:00.226 ******** 2026-04-09 00:51:51.523436 | orchestrator | ok: [localhost] => { 2026-04-09 00:51:51.523446 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-09 00:51:51.523454 | orchestrator | } 2026-04-09 00:51:51.523461 | orchestrator | 2026-04-09 00:51:51.523469 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-09 00:51:51.523476 | orchestrator | Thursday 09 April 2026 00:48:44 +0000 (0:00:00.152) 0:00:00.379 ******** 2026-04-09 00:51:51.523485 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-09 00:51:51.523493 | orchestrator | ...ignoring 2026-04-09 00:51:51.523500 | orchestrator | 2026-04-09 00:51:51.523507 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-09 00:51:51.523514 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:03.125) 0:00:03.505 ******** 2026-04-09 00:51:51.523521 | orchestrator | skipping: [localhost] 2026-04-09 00:51:51.523529 | orchestrator | 2026-04-09 00:51:51.523535 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-09 00:51:51.523543 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.072) 0:00:03.577 ******** 2026-04-09 00:51:51.523551 | orchestrator | ok: [localhost] 2026-04-09 00:51:51.523558 | orchestrator | 2026-04-09 00:51:51.523565 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:51:51.523572 | orchestrator | 2026-04-09 00:51:51.523579 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:51:51.523588 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.324) 0:00:03.902 ******** 2026-04-09 00:51:51.523595 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:51.523602 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:51.523608 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:51.523615 | orchestrator | 2026-04-09 00:51:51.523622 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:51:51.523629 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.291) 0:00:04.193 ******** 2026-04-09 00:51:51.523658 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-09 00:51:51.523666 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-04-09 00:51:51.523674 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-09 00:51:51.523681 | orchestrator | 2026-04-09 00:51:51.523688 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-09 00:51:51.523696 | orchestrator | 2026-04-09 00:51:51.523704 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-09 00:51:51.523712 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:00.951) 0:00:05.144 ******** 2026-04-09 00:51:51.523720 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:51:51.523728 | orchestrator | 2026-04-09 00:51:51.523736 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-09 00:51:51.523757 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:01.757) 0:00:06.901 ******** 2026-04-09 00:51:51.523765 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:51.523772 | orchestrator | 2026-04-09 00:51:51.523780 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-09 00:51:51.523789 | orchestrator | Thursday 09 April 2026 00:48:53 +0000 (0:00:03.110) 0:00:10.012 ******** 2026-04-09 00:51:51.523796 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:51.523805 | orchestrator | 2026-04-09 00:51:51.523814 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-09 00:51:51.523822 | orchestrator | Thursday 09 April 2026 00:48:53 +0000 (0:00:00.311) 0:00:10.323 ******** 2026-04-09 00:51:51.523831 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:51.523838 | orchestrator | 2026-04-09 00:51:51.523846 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-09 00:51:51.523936 | 
orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:00.383) 0:00:10.706 ******** 2026-04-09 00:51:51.523951 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:51.523958 | orchestrator | 2026-04-09 00:51:51.523966 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-09 00:51:51.523973 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:00.329) 0:00:11.036 ******** 2026-04-09 00:51:51.523981 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:51.523988 | orchestrator | 2026-04-09 00:51:51.523995 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-09 00:51:51.524003 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:00.231) 0:00:11.267 ******** 2026-04-09 00:51:51.524011 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:51:51.524019 | orchestrator | 2026-04-09 00:51:51.524027 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-09 00:51:51.524041 | orchestrator | Thursday 09 April 2026 00:48:55 +0000 (0:00:00.545) 0:00:11.813 ******** 2026-04-09 00:51:51.524049 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:51.524056 | orchestrator | 2026-04-09 00:51:51.524064 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-09 00:51:51.524072 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:00.835) 0:00:12.648 ******** 2026-04-09 00:51:51.524081 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:51.524088 | orchestrator | 2026-04-09 00:51:51.524096 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-09 00:51:51.524105 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:00.522) 0:00:13.171 ******** 2026-04-09 00:51:51.524113 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 00:51:51.524120 | orchestrator | 2026-04-09 00:51:51.524152 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-09 00:51:51.524162 | orchestrator | Thursday 09 April 2026 00:48:57 +0000 (0:00:00.239) 0:00:13.410 ******** 2026-04-09 00:51:51.524175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524247 | orchestrator | 2026-04-09 00:51:51.524255 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-09 00:51:51.524263 | orchestrator | Thursday 09 April 2026 00:48:58 +0000 (0:00:01.137) 0:00:14.548 ******** 2026-04-09 00:51:51.524284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:51:51.524312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:51:51.524320 | orchestrator |
2026-04-09 00:51:51.524329 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-04-09 00:51:51.524337 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:01.741) 0:00:16.289 ********
2026-04-09 00:51:51.524345 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 00:51:51.524354 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 00:51:51.524362 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 00:51:51.524371 | orchestrator |
2026-04-09 00:51:51.524379 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-04-09 00:51:51.524387 | orchestrator | Thursday 09 April 2026 00:49:02 +0000 (0:00:02.285) 0:00:18.574 ********
2026-04-09 00:51:51.524395 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 00:51:51.524402 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 00:51:51.524410 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 00:51:51.524419 | orchestrator |
2026-04-09 00:51:51.524428 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-09 00:51:51.524439 | orchestrator | Thursday 09 April 2026 00:49:04 +0000 (0:00:02.147) 0:00:20.721 ********
2026-04-09 00:51:51.524452 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 00:51:51.524459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 00:51:51.524466 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 00:51:51.524474 | orchestrator |
2026-04-09 00:51:51.524481 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-09 00:51:51.524489 | orchestrator | Thursday 09 April 2026 00:49:05 +0000 (0:00:01.077) 0:00:21.799 ********
2026-04-09 00:51:51.524502 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 00:51:51.524510 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 00:51:51.524519 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 00:51:51.524527 | orchestrator |
2026-04-09 00:51:51.524535 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-04-09 00:51:51.524543 | orchestrator | Thursday 09 April 2026 00:49:06 +0000 (0:00:01.465) 0:00:23.265 ********
2026-04-09 00:51:51.524551 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-09 00:51:51.524559 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-09 00:51:51.524567 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-09 00:51:51.524575 | orchestrator |
2026-04-09 00:51:51.524583 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-09 00:51:51.524591 | orchestrator | Thursday 09 April 2026 00:49:08 +0000 (0:00:01.156) 0:00:24.421 ********
2026-04-09 00:51:51.524599 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-09 00:51:51.524608 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-09 00:51:51.524616 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-09 00:51:51.524623 | orchestrator |
2026-04-09 00:51:51.524631 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-09 00:51:51.524639 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:01.409) 0:00:25.831 ********
2026-04-09 00:51:51.524648 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:51:51.524656 | orchestrator |
2026-04-09 00:51:51.524664 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-04-09 00:51:51.524673 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:00.498) 0:00:26.329 ********
2026-04-09
00:51:51.524682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524730 | orchestrator | 2026-04-09 00:51:51.524739 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-09 00:51:51.524746 | orchestrator | Thursday 09 April 2026 00:49:11 +0000 (0:00:01.458) 0:00:27.787 ******** 2026-04-09 00:51:51.524755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:51:51.524764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:51:51.524779 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:51.524788 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:51.524805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:51:51.524816 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:51.524823 | orchestrator | 2026-04-09 00:51:51.524831 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-09 00:51:51.524840 | orchestrator | Thursday 09 April 2026 00:49:11 +0000 (0:00:00.323) 0:00:28.111 ******** 2026-04-09 00:51:51.524849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:51:51.524858 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:51.524867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:51:51.524882 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:51.524890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:51:51.524898 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:51.524906 | orchestrator | 2026-04-09 00:51:51.524913 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-09 00:51:51.524926 | orchestrator | Thursday 09 April 2026 00:49:12 +0000 (0:00:00.941) 0:00:29.052 ******** 2026-04-09 00:51:51.524942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:51:51.524962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:51:51.524976 | orchestrator |
2026-04-09 00:51:51.524984 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] ***
2026-04-09 00:51:51.524991 | orchestrator | Thursday 09 April 2026 00:49:13 +0000 (0:00:01.040) 0:00:30.093 ********
2026-04-09 00:51:51.524999 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:51:51.525007 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:51:51.525015 | orchestrator | }
2026-04-09 00:51:51.525023 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:51:51.525031 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:51:51.525038 | orchestrator | }
2026-04-09 00:51:51.525045 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:51:51.525052 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:51:51.525059 | orchestrator | }
2026-04-09 00:51:51.525067 | orchestrator |
2026-04-09 00:51:51.525074 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 00:51:51.525081 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.292) 0:00:30.386 ********
2026-04-09 00:51:51.525105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR':
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:51:51.525116 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:51.525126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:51:51.525134 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:51.525150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:51:51.525161 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:51.525170 | orchestrator |
2026-04-09 00:51:51.525178 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-04-09 00:51:51.525186 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.876) 0:00:31.263 ********
2026-04-09 00:51:51.525195 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:51.525328 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:51.525338 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:51.525347 | orchestrator |
2026-04-09 00:51:51.525356 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-09 00:51:51.525364 | orchestrator | Thursday 09 April 2026 00:49:15 +0000 (0:00:00.777) 0:00:32.040 ********
2026-04-09 00:51:51.525373 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:51.525381 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:51.525390 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:51.525399 | orchestrator |
2026-04-09 00:51:51.525407 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-09 00:51:51.525416 | orchestrator | Thursday 09 April 2026 00:49:23 +0000 (0:00:07.830) 0:00:39.871 ********
2026-04-09 00:51:51.525425 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:51.525432 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:51.525441 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:51.525448 | orchestrator |
2026-04-09 00:51:51.525456 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-09 00:51:51.525463 | orchestrator |
2026-04-09 00:51:51.525472 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-09 00:51:51.525489 | orchestrator | Thursday 09 April 2026 00:49:23 +0000 (0:00:00.381) 0:00:40.252 ********
2026-04-09 00:51:51.525497 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:51.525506 | orchestrator |
2026-04-09 00:51:51.525515 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-09 00:51:51.525523 | orchestrator | Thursday 09 April 2026 00:49:24 +0000 (0:00:00.622) 0:00:40.875 ********
2026-04-09 00:51:51.525533 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:51.525542 | orchestrator |
2026-04-09 00:51:51.525551 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-09 00:51:51.525560 | orchestrator | Thursday 09 April 2026 00:49:24 +0000 (0:00:00.118) 0:00:40.994 ********
2026-04-09 00:51:51.525570 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:51.525578 | orchestrator |
2026-04-09 00:51:51.525598 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-09 00:51:51.525607 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:01.523) 0:00:42.517 ********
2026-04-09 00:51:51.525616 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:51.525625 | orchestrator |
2026-04-09 00:51:51.525633 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-09 00:51:51.525642 | orchestrator |
2026-04-09 00:51:51.525651 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-09 00:51:51.525671 | orchestrator | Thursday 09 April 2026 00:51:19 +0000 (0:01:53.753) 0:02:36.270 ********
2026-04-09 00:51:51.525679 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:51.525687 | orchestrator |
2026-04-09 00:51:51.525695 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-09 00:51:51.525702 | orchestrator | Thursday 09 April 2026 00:51:20 +0000 (0:00:01.024) 0:02:37.295 ********
2026-04-09 00:51:51.525711 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:51.525720 | orchestrator |
2026-04-09 00:51:51.525729 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-09 00:51:51.525736 | orchestrator | Thursday 09 April 2026 00:51:21 +0000 (0:00:00.185) 0:02:37.481 ********
2026-04-09 00:51:51.525742 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:51.525748 | orchestrator |
2026-04-09 00:51:51.525755 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-09 00:51:51.525764 | orchestrator | Thursday 09 April 2026 00:51:27 +0000 (0:00:06.606) 0:02:44.088 ********
2026-04-09 00:51:51.525773 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:51.525781 | orchestrator |
2026-04-09 00:51:51.525790 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-09 00:51:51.525799 | orchestrator |
2026-04-09 00:51:51.525808 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-09 00:51:51.525815 | orchestrator | Thursday 09 April 2026 00:51:34 +0000 (0:00:06.714) 0:02:50.802 ********
2026-04-09 00:51:51.525823 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:51.525830 | orchestrator |
2026-04-09 00:51:51.525838 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-09 00:51:51.525846 | orchestrator | Thursday 09 April 2026 00:51:35 +0000 (0:00:00.870) 0:02:51.672 ********
2026-04-09 00:51:51.525854 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:51.525861 | orchestrator |
2026-04-09 00:51:51.525868 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-09 00:51:51.525876 | orchestrator | Thursday 09 April 2026 00:51:35 +0000 (0:00:00.236) 0:02:51.910 ********
2026-04-09 00:51:51.525883 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:51.525891 | orchestrator |
2026-04-09 00:51:51.525898 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-09 00:51:51.525905 | orchestrator | Thursday 09 April 2026 00:51:37 +0000 (0:00:01.958) 0:02:53.868 ********
2026-04-09 00:51:51.525913 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:51.525921 | orchestrator |
2026-04-09 00:51:51.525928 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-09 00:51:51.525935 | orchestrator |
2026-04-09 00:51:51.525943 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-09 00:51:51.525950 | orchestrator | Thursday 09 April 2026 00:51:47 +0000 (0:00:10.367) 0:03:04.236 ********
2026-04-09 00:51:51.525958 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:51:51.525965 | orchestrator |
2026-04-09 00:51:51.525973 |
orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-09 00:51:51.525981 | orchestrator | Thursday 09 April 2026 00:51:48 +0000 (0:00:00.687) 0:03:04.924 ********
2026-04-09 00:51:51.525988 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:51.525996 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:51.526004 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:51.526012 | orchestrator |
2026-04-09 00:51:51.526098 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:51:51.526109 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:51:51.526119 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-04-09 00:51:51.526128 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:51:51.526147 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:51:51.526156 | orchestrator |
2026-04-09 00:51:51.526165 | orchestrator |
2026-04-09 00:51:51.526173 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:51:51.526182 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:02.556) 0:03:07.480 ********
2026-04-09 00:51:51.526191 | orchestrator | ===============================================================================
2026-04-09 00:51:51.526219 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 130.84s
2026-04-09 00:51:51.526234 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.09s
2026-04-09 00:51:51.526242 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.83s
2026-04-09 00:51:51.526250 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.13s
2026-04-09 00:51:51.526257 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 3.11s
2026-04-09 00:51:51.526266 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.56s
2026-04-09 00:51:51.526274 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.52s
2026-04-09 00:51:51.526292 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.29s
2026-04-09 00:51:51.526300 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.15s
2026-04-09 00:51:51.526308 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.76s
2026-04-09 00:51:51.526316 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.74s
2026-04-09 00:51:51.526324 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.47s
2026-04-09 00:51:51.526332 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.46s
2026-04-09 00:51:51.526339 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.41s
2026-04-09 00:51:51.526347 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.16s
2026-04-09 00:51:51.526355 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.14s
2026-04-09 00:51:51.526363 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.08s
2026-04-09 00:51:51.526372 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.04s
2026-04-09 00:51:51.526379 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.95s
2026-04-09 00:51:51.526387 | orchestrator | service-cert-copy : rabbitmq | Copying
over backend internal TLS key ---- 0.94s
2026-04-09 00:51:51.526396 | orchestrator | 2026-04-09 00:51:51 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state STARTED
2026-04-09 00:51:51.526404 | orchestrator | 2026-04-09 00:51:51 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED
2026-04-09 00:51:51.526412 | orchestrator | 2026-04-09 00:51:51 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:28.697916 | orchestrator | 2026-04-09 00:53:28 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:53:28.701812 | orchestrator | 2026-04-09 00:53:28 | INFO  | Task 7ccca0e7-ed57-4006-82ef-2f82f409fa1a is in state SUCCESS
2026-04-09 00:53:28.703471 | orchestrator |
2026-04-09 00:53:28.703540 | orchestrator |
2026-04-09 00:53:28.703550 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:53:28.703558 | orchestrator |
2026-04-09 00:53:28.703564 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:53:28.703586 | orchestrator | Thursday 09 April 2026 00:49:33 +0000 (0:00:00.278) 0:00:00.278 ********
2026-04-09 00:53:28.703593 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:53:28.703600 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:53:28.703605 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:53:28.703609 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:28.703612 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:28.703616 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:28.703621 | orchestrator |
2026-04-09 00:53:28.703624 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:53:28.703629 | orchestrator |
Thursday 09 April 2026 00:49:34 +0000 (0:00:00.601) 0:00:00.879 ********
2026-04-09 00:53:28.703633 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-09 00:53:28.703637 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-09 00:53:28.703641 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-09 00:53:28.703645 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-09 00:53:28.703649 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-09 00:53:28.703653 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-09 00:53:28.703657 | orchestrator |
2026-04-09 00:53:28.703660 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-09 00:53:28.703664 | orchestrator |
2026-04-09 00:53:28.703668 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-09 00:53:28.703673 | orchestrator | Thursday 09 April 2026 00:49:35 +0000 (0:00:01.140) 0:00:02.020 ********
2026-04-09 00:53:28.703678 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:28.703683 | orchestrator |
2026-04-09 00:53:28.703687 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-09 00:53:28.703691 | orchestrator | Thursday 09 April 2026 00:49:36 +0000 (0:00:01.101) 0:00:03.121 ********
2026-04-09 00:53:28.703697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703744 | orchestrator |
2026-04-09 00:53:28.703760 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-09 00:53:28.703794 | orchestrator | Thursday 09 April 2026 00:49:37 +0000 (0:00:01.512) 0:00:04.634 ********
2026-04-09 00:53:28.703802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703830 | orchestrator |
2026-04-09 00:53:28.703834 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-09 00:53:28.703838 | orchestrator | Thursday 09 April 2026 00:49:39 +0000 (0:00:01.571) 0:00:06.206 ********
2026-04-09 00:53:28.703844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703932 | orchestrator |
2026-04-09 00:53:28.703938 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-04-09 00:53:28.703945 | orchestrator | Thursday 09 April 2026 00:49:40 +0000 (0:00:01.421) 0:00:07.628 ********
2026-04-09 00:53:28.703951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703980 | orchestrator |
2026-04-09 00:53:28.703986 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-04-09 00:53:28.703990 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:01.939) 0:00:09.567 ********
2026-04-09 00:53:28.703995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.703999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.704003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.704033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.704038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.704153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:53:28.704169 | orchestrator |
2026-04-09 00:53:28.704176 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-04-09 00:53:28.704184 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:01.797) 0:00:11.365 ********
2026-04-09 00:53:28.704190 | orchestrator | changed: [testbed-node-3] => {
2026-04-09 00:53:28.704197 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:53:28.704204 | orchestrator | }
2026-04-09 00:53:28.704211 | orchestrator | changed: [testbed-node-4] => {
2026-04-09 00:53:28.704218 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:53:28.704224 | orchestrator | }
2026-04-09 00:53:28.704231 | orchestrator | changed: [testbed-node-5] => {
2026-04-09 00:53:28.704237 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:53:28.704247 | orchestrator | }
2026-04-09 00:53:28.704256 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:53:28.704262 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:53:28.704268 | orchestrator | }
2026-04-09 00:53:28.704274 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:53:28.704280 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:53:28.704286 | orchestrator | }
2026-04-09 00:53:28.704293 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:53:28.704299 | orchestrator |  "msg": "Notifying
handlers" 2026-04-09 00:53:28.704326 | orchestrator | } 2026-04-09 00:53:28.704333 | orchestrator | 2026-04-09 00:53:28.704339 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:53:28.704500 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:00.783) 0:00:12.149 ******** 2026-04-09 00:53:28.704507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.704514 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:53:28.704535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.704547 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:53:28.704552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.704555 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:53:28.704559 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.704563 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.704567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.704571 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.704575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.704579 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.704583 | orchestrator | 2026-04-09 00:53:28.704587 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-09 00:53:28.704591 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:00.973) 0:00:13.122 ******** 2026-04-09 00:53:28.704595 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:53:28.704598 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:53:28.704602 | 
orchestrator | changed: [testbed-node-5] 2026-04-09 00:53:28.704606 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.704612 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.704617 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.704623 | orchestrator | 2026-04-09 00:53:28.704629 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-09 00:53:28.704637 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:02.538) 0:00:15.660 ******** 2026-04-09 00:53:28.704643 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-09 00:53:28.704649 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-09 00:53:28.704655 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-09 00:53:28.704661 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-09 00:53:28.704686 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-09 00:53:28.704693 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-09 00:53:28.704699 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:53:28.704706 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:53:28.704718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:53:28.704724 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:53:28.704730 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:53:28.704736 | orchestrator | 
changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:53:28.704747 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 00:53:28.704761 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 00:53:28.704770 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 00:53:28.704775 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 00:53:28.704781 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 00:53:28.704788 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-09 00:53:28.704794 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:53:28.704801 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:53:28.704806 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:53:28.704811 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:53:28.704816 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:53:28.704822 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:53:28.704827 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:53:28.704833 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:53:28.704838 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:53:28.704843 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:53:28.704848 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:53:28.704853 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:53:28.704860 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:53:28.704866 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:53:28.704872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:53:28.704878 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:53:28.704884 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:53:28.704890 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-09 00:53:28.704896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:53:28.704909 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-09 00:53:28.704915 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 00:53:28.704923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 00:53:28.704927 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-09 00:53:28.704931 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 00:53:28.704935 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-09 00:53:28.704940 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-09 00:53:28.704945 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-09 00:53:28.704948 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-09 00:53:28.704952 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-09 00:53:28.704962 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 00:53:28.704966 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-09 00:53:28.704974 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 00:53:28.704978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 00:53:28.704982 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 00:53:28.704985 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 00:53:28.704989 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 00:53:28.704993 | orchestrator | 2026-04-09 00:53:28.704997 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:53:28.705001 | orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:23.473) 0:00:39.133 ******** 2026-04-09 00:53:28.705004 | orchestrator | 2026-04-09 00:53:28.705008 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:53:28.705012 | orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:00.236) 0:00:39.370 ******** 2026-04-09 00:53:28.705016 | orchestrator | 2026-04-09 00:53:28.705020 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:53:28.705023 | orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:00.354) 0:00:39.725 ******** 2026-04-09 00:53:28.705027 | orchestrator | 2026-04-09 00:53:28.705031 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:53:28.705034 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.124) 0:00:39.850 ******** 2026-04-09 00:53:28.705038 | orchestrator | 2026-04-09 00:53:28.705063 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:53:28.705070 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.059) 0:00:39.910 
******** 2026-04-09 00:53:28.705078 | orchestrator | 2026-04-09 00:53:28.705085 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:53:28.705104 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.062) 0:00:39.972 ******** 2026-04-09 00:53:28.705110 | orchestrator | 2026-04-09 00:53:28.705115 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-09 00:53:28.705121 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.061) 0:00:40.034 ******** 2026-04-09 00:53:28.705127 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:53:28.705134 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.705139 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:53:28.705145 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:53:28.705151 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.705156 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.705162 | orchestrator | 2026-04-09 00:53:28.705168 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-09 00:53:28.705174 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:02.348) 0:00:42.382 ******** 2026-04-09 00:53:28.705180 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:53:28.705185 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.705190 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.705196 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:53:28.705203 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.705208 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:53:28.705214 | orchestrator | 2026-04-09 00:53:28.705220 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-09 00:53:28.705226 | orchestrator | 2026-04-09 00:53:28.705232 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-04-09 00:53:28.705238 | orchestrator | Thursday 09 April 2026 00:50:24 +0000 (0:00:09.173) 0:00:51.556 ******** 2026-04-09 00:53:28.705244 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:28.705250 | orchestrator | 2026-04-09 00:53:28.705257 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-09 00:53:28.705261 | orchestrator | Thursday 09 April 2026 00:50:25 +0000 (0:00:00.833) 0:00:52.389 ******** 2026-04-09 00:53:28.705265 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:28.705269 | orchestrator | 2026-04-09 00:53:28.705273 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-09 00:53:28.705277 | orchestrator | Thursday 09 April 2026 00:50:26 +0000 (0:00:00.578) 0:00:52.968 ******** 2026-04-09 00:53:28.705281 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.705284 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.705288 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.705292 | orchestrator | 2026-04-09 00:53:28.705296 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-09 00:53:28.705300 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:01.024) 0:00:53.993 ******** 2026-04-09 00:53:28.705303 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.705307 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.705311 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.705314 | orchestrator | 2026-04-09 00:53:28.705318 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-09 00:53:28.705322 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:00.307) 0:00:54.300 ******** 
2026-04-09 00:53:28.705326 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.705329 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.705333 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.705337 | orchestrator | 2026-04-09 00:53:28.705341 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-09 00:53:28.705376 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:00.319) 0:00:54.619 ******** 2026-04-09 00:53:28.705380 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.705384 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.705388 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.705392 | orchestrator | 2026-04-09 00:53:28.705400 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-09 00:53:28.705409 | orchestrator | Thursday 09 April 2026 00:50:28 +0000 (0:00:00.433) 0:00:55.052 ******** 2026-04-09 00:53:28.705413 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.705417 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.705421 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.705425 | orchestrator | 2026-04-09 00:53:28.705429 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-09 00:53:28.705432 | orchestrator | Thursday 09 April 2026 00:50:28 +0000 (0:00:00.344) 0:00:55.396 ******** 2026-04-09 00:53:28.705436 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705440 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705444 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705448 | orchestrator | 2026-04-09 00:53:28.705452 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-09 00:53:28.705455 | orchestrator | Thursday 09 April 2026 00:50:29 +0000 (0:00:00.475) 0:00:55.872 ******** 2026-04-09 00:53:28.705459 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:53:28.705463 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705467 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705471 | orchestrator | 2026-04-09 00:53:28.705474 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-09 00:53:28.705478 | orchestrator | Thursday 09 April 2026 00:50:29 +0000 (0:00:00.303) 0:00:56.175 ******** 2026-04-09 00:53:28.705484 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705491 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705497 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705504 | orchestrator | 2026-04-09 00:53:28.705513 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-09 00:53:28.705523 | orchestrator | Thursday 09 April 2026 00:50:29 +0000 (0:00:00.282) 0:00:56.457 ******** 2026-04-09 00:53:28.705529 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705535 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705542 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705549 | orchestrator | 2026-04-09 00:53:28.705555 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-09 00:53:28.705561 | orchestrator | Thursday 09 April 2026 00:50:29 +0000 (0:00:00.272) 0:00:56.730 ******** 2026-04-09 00:53:28.705567 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705573 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705579 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705586 | orchestrator | 2026-04-09 00:53:28.705592 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-09 00:53:28.705598 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:00.566) 0:00:57.296 ******** 2026-04-09 00:53:28.705604 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:53:28.705611 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705617 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705624 | orchestrator | 2026-04-09 00:53:28.705631 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-09 00:53:28.705638 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:00.332) 0:00:57.629 ******** 2026-04-09 00:53:28.705644 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705651 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705658 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705664 | orchestrator | 2026-04-09 00:53:28.705671 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-09 00:53:28.705679 | orchestrator | Thursday 09 April 2026 00:50:31 +0000 (0:00:00.296) 0:00:57.925 ******** 2026-04-09 00:53:28.705683 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705687 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705691 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705695 | orchestrator | 2026-04-09 00:53:28.705699 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-09 00:53:28.705708 | orchestrator | Thursday 09 April 2026 00:50:31 +0000 (0:00:00.276) 0:00:58.202 ******** 2026-04-09 00:53:28.705712 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705716 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705719 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705725 | orchestrator | 2026-04-09 00:53:28.705731 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-09 00:53:28.705736 | orchestrator | Thursday 09 April 2026 00:50:31 +0000 (0:00:00.486) 0:00:58.688 ******** 2026-04-09 00:53:28.705745 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:53:28.705753 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705758 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705764 | orchestrator | 2026-04-09 00:53:28.705770 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-09 00:53:28.705776 | orchestrator | Thursday 09 April 2026 00:50:32 +0000 (0:00:00.334) 0:00:59.023 ******** 2026-04-09 00:53:28.705781 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705787 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705792 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705797 | orchestrator | 2026-04-09 00:53:28.705803 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-09 00:53:28.705809 | orchestrator | Thursday 09 April 2026 00:50:32 +0000 (0:00:00.283) 0:00:59.306 ******** 2026-04-09 00:53:28.705815 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.705820 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.705826 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.705832 | orchestrator | 2026-04-09 00:53:28.705838 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-09 00:53:28.705843 | orchestrator | Thursday 09 April 2026 00:50:32 +0000 (0:00:00.278) 0:00:59.585 ******** 2026-04-09 00:53:28.705851 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:28.705857 | orchestrator | 2026-04-09 00:53:28.706366 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-09 00:53:28.706397 | orchestrator | Thursday 09 April 2026 00:50:33 +0000 (0:00:00.672) 0:01:00.257 ******** 2026-04-09 00:53:28.706404 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.706410 | orchestrator | ok: 
[testbed-node-1] 2026-04-09 00:53:28.706415 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.706422 | orchestrator | 2026-04-09 00:53:28.706434 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-09 00:53:28.706439 | orchestrator | Thursday 09 April 2026 00:50:33 +0000 (0:00:00.399) 0:01:00.656 ******** 2026-04-09 00:53:28.706445 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.706452 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.706458 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.706476 | orchestrator | 2026-04-09 00:53:28.706483 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-09 00:53:28.706497 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:00.366) 0:01:01.023 ******** 2026-04-09 00:53:28.706503 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.706510 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.706515 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.706521 | orchestrator | 2026-04-09 00:53:28.706528 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-09 00:53:28.706534 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:00.393) 0:01:01.416 ******** 2026-04-09 00:53:28.706541 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.706545 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.706549 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.706553 | orchestrator | 2026-04-09 00:53:28.706557 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-09 00:53:28.706561 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:00.270) 0:01:01.686 ******** 2026-04-09 00:53:28.706576 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.706579 | orchestrator | skipping: [testbed-node-1] 
2026-04-09 00:53:28.706583 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.706588 | orchestrator | 2026-04-09 00:53:28.706594 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-09 00:53:28.706599 | orchestrator | Thursday 09 April 2026 00:50:35 +0000 (0:00:00.281) 0:01:01.968 ******** 2026-04-09 00:53:28.706605 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.706613 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.706621 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.706630 | orchestrator | 2026-04-09 00:53:28.706635 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-09 00:53:28.706641 | orchestrator | Thursday 09 April 2026 00:50:35 +0000 (0:00:00.294) 0:01:02.263 ******** 2026-04-09 00:53:28.706647 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.706652 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.706658 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.706664 | orchestrator | 2026-04-09 00:53:28.706669 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-09 00:53:28.706674 | orchestrator | Thursday 09 April 2026 00:50:35 +0000 (0:00:00.275) 0:01:02.538 ******** 2026-04-09 00:53:28.706681 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.706686 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.706692 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.706699 | orchestrator | 2026-04-09 00:53:28.706705 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-09 00:53:28.706711 | orchestrator | Thursday 09 April 2026 00:50:36 +0000 (0:00:00.392) 0:01:02.930 ******** 2026-04-09 00:53:28.706720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 
'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.706803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.706812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 
'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.706831 | orchestrator | 2026-04-09 00:53:28.706838 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-09 00:53:28.706842 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:02.300) 0:01:05.230 ******** 2026-04-09 00:53:28.706846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.706892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': 
'1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.706904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.706911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.706918 | orchestrator | 2026-04-09 00:53:28.706925 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-04-09 00:53:28.706929 | orchestrator | Thursday 09 April 2026 00:50:43 +0000 (0:00:05.443) 0:01:10.673 ******** 2026-04-09 00:53:28.706933 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-04-09 00:53:28.706937 | orchestrator | 2026-04-09 00:53:28.706941 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-04-09 00:53:28.706945 | orchestrator | Thursday 09 April 2026 00:50:44 +0000 (0:00:00.729) 0:01:11.403 ******** 2026-04-09 00:53:28.706949 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 00:53:28.706953 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.706957 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.706961 | orchestrator | 2026-04-09 00:53:28.706966 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-04-09 00:53:28.706970 | orchestrator | Thursday 09 April 2026 00:50:45 +0000 (0:00:00.748) 0:01:12.152 ******** 2026-04-09 00:53:28.706975 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.706979 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.706984 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.706988 | orchestrator | 2026-04-09 00:53:28.706992 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-04-09 00:53:28.707001 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:01.571) 0:01:13.724 ******** 2026-04-09 00:53:28.707005 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.707010 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.707014 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.707018 | orchestrator | 2026-04-09 00:53:28.707023 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-09 00:53:28.707027 | orchestrator | Thursday 09 April 2026 00:50:48 +0000 (0:00:01.977) 0:01:15.701 ******** 2026-04-09 00:53:28.707035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-09 00:53:28.707066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707107 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 
00:53:28.707131 | orchestrator | 2026-04-09 00:53:28.707136 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-09 00:53:28.707141 | orchestrator | Thursday 09 April 2026 00:50:53 +0000 (0:00:04.909) 0:01:20.610 ******** 2026-04-09 00:53:28.707145 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:53:28.707150 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.707154 | orchestrator | } 2026-04-09 00:53:28.707159 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:53:28.707163 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.707167 | orchestrator | } 2026-04-09 00:53:28.707172 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:53:28.707176 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.707180 | orchestrator | } 2026-04-09 00:53:28.707185 | orchestrator | 2026-04-09 00:53:28.707189 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:53:28.707193 | orchestrator | Thursday 09 April 2026 00:50:54 +0000 (0:00:00.542) 0:01:21.153 ******** 2026-04-09 00:53:28.707198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-09 00:53:28.707247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707277 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707283 | orchestrator | 2026-04-09 00:53:28.707290 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-09 00:53:28.707296 | orchestrator | Thursday 09 April 2026 00:50:56 +0000 (0:00:02.231) 0:01:23.385 ******** 2026-04-09 00:53:28.707302 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-09 00:53:28.707308 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-09 00:53:28.707314 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-09 00:53:28.707320 | orchestrator | 2026-04-09 00:53:28.707326 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-09 00:53:28.707332 | orchestrator | Thursday 09 April 2026 00:51:16 +0000 (0:00:20.290) 0:01:43.676 ******** 2026-04-09 00:53:28.707338 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:53:28.707344 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.707351 | orchestrator | } 2026-04-09 00:53:28.707358 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:53:28.707364 | orchestrator |  "msg": "Notifying 
handlers" 2026-04-09 00:53:28.707371 | orchestrator | } 2026-04-09 00:53:28.707377 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:53:28.707384 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.707390 | orchestrator | } 2026-04-09 00:53:28.707396 | orchestrator | 2026-04-09 00:53:28.707407 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 00:53:28.707413 | orchestrator | Thursday 09 April 2026 00:51:17 +0000 (0:00:00.464) 0:01:44.141 ******** 2026-04-09 00:53:28.707419 | orchestrator | 2026-04-09 00:53:28.707429 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 00:53:28.707436 | orchestrator | Thursday 09 April 2026 00:51:17 +0000 (0:00:00.059) 0:01:44.200 ******** 2026-04-09 00:53:28.707442 | orchestrator | 2026-04-09 00:53:28.707448 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 00:53:28.707455 | orchestrator | Thursday 09 April 2026 00:51:17 +0000 (0:00:00.059) 0:01:44.260 ******** 2026-04-09 00:53:28.707461 | orchestrator | 2026-04-09 00:53:28.707467 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-09 00:53:28.707474 | orchestrator | Thursday 09 April 2026 00:51:17 +0000 (0:00:00.061) 0:01:44.322 ******** 2026-04-09 00:53:28.707478 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.707482 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.707486 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.707490 | orchestrator | 2026-04-09 00:53:28.707493 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-09 00:53:28.707497 | orchestrator | Thursday 09 April 2026 00:51:30 +0000 (0:00:13.302) 0:01:57.624 ******** 2026-04-09 00:53:28.707505 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.707509 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.707512 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.707516 | orchestrator | 2026-04-09 00:53:28.707520 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-04-09 00:53:28.707524 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:13.531) 0:02:11.155 ******** 2026-04-09 00:53:28.707528 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-09 00:53:28.707531 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-09 00:53:28.707535 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-09 00:53:28.707539 | orchestrator | 2026-04-09 00:53:28.707543 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-09 00:53:28.707546 | orchestrator | Thursday 09 April 2026 00:51:58 +0000 (0:00:14.167) 0:02:25.323 ******** 2026-04-09 00:53:28.707550 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.707554 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.707558 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.707561 | orchestrator | 2026-04-09 00:53:28.707565 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-09 00:53:28.707569 | orchestrator | Thursday 09 April 2026 00:52:07 +0000 (0:00:09.222) 0:02:34.545 ******** 2026-04-09 00:53:28.707573 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:28.707577 | orchestrator | 2026-04-09 00:53:28.707580 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-09 00:53:28.707584 | orchestrator | Thursday 09 April 2026 00:52:07 +0000 (0:00:00.099) 0:02:34.644 ******** 2026-04-09 00:53:28.707588 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.707592 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.707595 | orchestrator | ok: [testbed-node-2] 2026-04-09 
00:53:28.707599 | orchestrator | 2026-04-09 00:53:28.707603 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-09 00:53:28.707607 | orchestrator | Thursday 09 April 2026 00:52:08 +0000 (0:00:01.106) 0:02:35.751 ******** 2026-04-09 00:53:28.707611 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.707615 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.707618 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.707622 | orchestrator | 2026-04-09 00:53:28.707626 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-09 00:53:28.707630 | orchestrator | Thursday 09 April 2026 00:52:09 +0000 (0:00:00.624) 0:02:36.376 ******** 2026-04-09 00:53:28.707633 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.707637 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.707641 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.707645 | orchestrator | 2026-04-09 00:53:28.707648 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-09 00:53:28.707652 | orchestrator | Thursday 09 April 2026 00:52:10 +0000 (0:00:00.765) 0:02:37.141 ******** 2026-04-09 00:53:28.707656 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.707660 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.707663 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.707667 | orchestrator | 2026-04-09 00:53:28.707671 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-09 00:53:28.707675 | orchestrator | Thursday 09 April 2026 00:52:10 +0000 (0:00:00.579) 0:02:37.721 ******** 2026-04-09 00:53:28.707678 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.707682 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.707686 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.707690 | orchestrator | 
2026-04-09 00:53:28.707693 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-09 00:53:28.707697 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:00.716) 0:02:38.438 ******** 2026-04-09 00:53:28.707701 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.707704 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.707708 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.707712 | orchestrator | 2026-04-09 00:53:28.707719 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-04-09 00:53:28.707723 | orchestrator | Thursday 09 April 2026 00:52:12 +0000 (0:00:00.902) 0:02:39.340 ******** 2026-04-09 00:53:28.707727 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-09 00:53:28.707731 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-09 00:53:28.707734 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-09 00:53:28.707738 | orchestrator | 2026-04-09 00:53:28.707742 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-09 00:53:28.707746 | orchestrator | Thursday 09 April 2026 00:52:13 +0000 (0:00:00.905) 0:02:40.245 ******** 2026-04-09 00:53:28.707749 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.707753 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.707757 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.707761 | orchestrator | 2026-04-09 00:53:28.707764 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-09 00:53:28.707771 | orchestrator | Thursday 09 April 2026 00:52:13 +0000 (0:00:00.229) 0:02:40.475 ******** 2026-04-09 00:53:28.707778 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 
'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707782 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707787 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707791 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-09 00:53:28.707796 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707800 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707809 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707823 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707831 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707839 | orchestrator | 2026-04-09 00:53:28.707843 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-09 00:53:28.707847 | orchestrator | Thursday 09 April 2026 00:52:16 +0000 (0:00:02.578) 0:02:43.053 ******** 2026-04-09 00:53:28.707851 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707858 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707862 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707870 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707885 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 
00:53:28.707904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.707911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.707915 | orchestrator | 2026-04-09 00:53:28.707919 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-04-09 00:53:28.707922 | orchestrator | Thursday 09 April 2026 00:52:22 +0000 (0:00:05.856) 0:02:48.909 ******** 2026-04-09 00:53:28.707929 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-04-09 00:53:28.707933 | orchestrator | 2026-04-09 00:53:28.707937 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-04-09 00:53:28.707940 | orchestrator | Thursday 09 April 2026 00:52:22 +0000 (0:00:00.449) 0:02:49.359 ******** 2026-04-09 00:53:28.707944 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.707948 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.707952 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.707956 | orchestrator | 
2026-04-09 00:53:28.707959 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-04-09 00:53:28.707963 | orchestrator | Thursday 09 April 2026 00:52:23 +0000 (0:00:00.646) 0:02:50.006 ******** 2026-04-09 00:53:28.707967 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.707971 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.707975 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.707978 | orchestrator | 2026-04-09 00:53:28.707982 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-04-09 00:53:28.707986 | orchestrator | Thursday 09 April 2026 00:52:24 +0000 (0:00:01.533) 0:02:51.540 ******** 2026-04-09 00:53:28.707990 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.707993 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.707997 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.708001 | orchestrator | 2026-04-09 00:53:28.708005 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-09 00:53:28.708009 | orchestrator | Thursday 09 April 2026 00:52:26 +0000 (0:00:01.584) 0:02:53.125 ******** 2026-04-09 00:53:28.708012 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708019 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708024 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708032 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708115 | orchestrator | 2026-04-09 00:53:28.708119 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-09 00:53:28.708123 | orchestrator | Thursday 09 April 2026 00:52:30 +0000 (0:00:04.222) 0:02:57.347 
******** 2026-04-09 00:53:28.708126 | orchestrator | ok: [testbed-node-0] => { 2026-04-09 00:53:28.708130 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.708134 | orchestrator | } 2026-04-09 00:53:28.708138 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:53:28.708142 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.708146 | orchestrator | } 2026-04-09 00:53:28.708170 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:53:28.708175 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.708178 | orchestrator | } 2026-04-09 00:53:28.708182 | orchestrator | 2026-04-09 00:53:28.708186 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:53:28.708190 | orchestrator | Thursday 09 April 2026 00:52:30 +0000 (0:00:00.285) 0:02:57.633 ******** 2026-04-09 00:53:28.708200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2026-04-09 00:53:28.708212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:28.708247 | orchestrator | included: 
/ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:53:28.708254 | orchestrator | 2026-04-09 00:53:28.708258 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-09 00:53:28.708262 | orchestrator | Thursday 09 April 2026 00:52:33 +0000 (0:00:02.563) 0:03:00.197 ******** 2026-04-09 00:53:28.708266 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-09 00:53:28.708270 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-09 00:53:28.708274 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-09 00:53:28.708277 | orchestrator | 2026-04-09 00:53:28.708281 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-09 00:53:28.708285 | orchestrator | Thursday 09 April 2026 00:52:54 +0000 (0:00:21.137) 0:03:21.335 ******** 2026-04-09 00:53:28.708289 | orchestrator | ok: [testbed-node-0] => { 2026-04-09 00:53:28.708293 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.708297 | orchestrator | } 2026-04-09 00:53:28.708300 | orchestrator | ok: [testbed-node-1] => { 2026-04-09 00:53:28.708304 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.708308 | orchestrator | } 2026-04-09 00:53:28.708312 | orchestrator | ok: [testbed-node-2] => { 2026-04-09 00:53:28.708315 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:53:28.708319 | orchestrator | } 2026-04-09 00:53:28.708323 | orchestrator | 2026-04-09 00:53:28.708327 | orchestrator | TASK 
[ovn-db : Flush handlers] ************************************************* 2026-04-09 00:53:28.708331 | orchestrator | Thursday 09 April 2026 00:52:55 +0000 (0:00:00.454) 0:03:21.789 ******** 2026-04-09 00:53:28.708335 | orchestrator | 2026-04-09 00:53:28.708338 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 00:53:28.708342 | orchestrator | Thursday 09 April 2026 00:52:55 +0000 (0:00:00.168) 0:03:21.958 ******** 2026-04-09 00:53:28.708346 | orchestrator | 2026-04-09 00:53:28.708350 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 00:53:28.708354 | orchestrator | Thursday 09 April 2026 00:52:55 +0000 (0:00:00.077) 0:03:22.035 ******** 2026-04-09 00:53:28.708358 | orchestrator | 2026-04-09 00:53:28.708361 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-09 00:53:28.708365 | orchestrator | Thursday 09 April 2026 00:52:55 +0000 (0:00:00.062) 0:03:22.098 ******** 2026-04-09 00:53:28.708369 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.708373 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.708377 | orchestrator | 2026-04-09 00:53:28.708380 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-09 00:53:28.708384 | orchestrator | Thursday 09 April 2026 00:53:07 +0000 (0:00:11.960) 0:03:34.058 ******** 2026-04-09 00:53:28.708388 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:28.708392 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:28.708396 | orchestrator | 2026-04-09 00:53:28.708400 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-09 00:53:28.708403 | orchestrator | Thursday 09 April 2026 00:53:19 +0000 (0:00:11.783) 0:03:45.841 ******** 2026-04-09 00:53:28.708407 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:53:28.708411 | orchestrator | 2026-04-09 00:53:28.708415 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-09 00:53:28.708418 | orchestrator | Thursday 09 April 2026 00:53:19 +0000 (0:00:00.114) 0:03:45.956 ******** 2026-04-09 00:53:28.708422 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.708426 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.708430 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.708436 | orchestrator | 2026-04-09 00:53:28.708442 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-09 00:53:28.708452 | orchestrator | Thursday 09 April 2026 00:53:20 +0000 (0:00:00.995) 0:03:46.951 ******** 2026-04-09 00:53:28.708457 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.708463 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.708469 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.708475 | orchestrator | 2026-04-09 00:53:28.708481 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-09 00:53:28.708487 | orchestrator | Thursday 09 April 2026 00:53:21 +0000 (0:00:01.123) 0:03:48.075 ******** 2026-04-09 00:53:28.708492 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.708499 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.708506 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.708512 | orchestrator | 2026-04-09 00:53:28.708517 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-09 00:53:28.708523 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:00.866) 0:03:48.941 ******** 2026-04-09 00:53:28.708529 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:28.708535 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:28.708541 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:28.708547 | orchestrator | 
2026-04-09 00:53:28.708553 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-09 00:53:28.708559 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:00.663) 0:03:49.605 ******** 2026-04-09 00:53:28.708565 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.708575 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.708581 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.708587 | orchestrator | 2026-04-09 00:53:28.708593 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-09 00:53:28.708599 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:01.116) 0:03:50.722 ******** 2026-04-09 00:53:28.708611 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:28.708615 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:28.708619 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:28.708623 | orchestrator | 2026-04-09 00:53:28.708627 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-04-09 00:53:28.708630 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:01.560) 0:03:52.283 ******** 2026-04-09 00:53:28.708634 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-09 00:53:28.708638 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-09 00:53:28.708642 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-09 00:53:28.708645 | orchestrator | 2026-04-09 00:53:28.708649 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:53:28.708653 | orchestrator | testbed-node-0 : ok=64  changed=26  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-09 00:53:28.708658 | orchestrator | testbed-node-1 : ok=62  changed=27  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-04-09 00:53:28.708662 | orchestrator | testbed-node-2 : ok=62  changed=27  unreachable=0 failed=0 skipped=23  rescued=0 
ignored=0 2026-04-09 00:53:28.708666 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:53:28.708670 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:53:28.708673 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:53:28.708677 | orchestrator | 2026-04-09 00:53:28.708681 | orchestrator | 2026-04-09 00:53:28.708685 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:53:28.708689 | orchestrator | Thursday 09 April 2026 00:53:26 +0000 (0:00:01.029) 0:03:53.313 ******** 2026-04-09 00:53:28.708697 | orchestrator | =============================================================================== 2026-04-09 00:53:28.708700 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 25.31s 2026-04-09 00:53:28.708704 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 25.26s 2026-04-09 00:53:28.708708 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.47s 2026-04-09 00:53:28.708712 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 21.14s 2026-04-09 00:53:28.708716 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 20.29s 2026-04-09 00:53:28.708719 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 14.17s 2026-04-09 00:53:28.708723 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.22s 2026-04-09 00:53:28.708727 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 9.18s 2026-04-09 00:53:28.708730 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.86s 2026-04-09 
00:53:28.708734 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.44s 2026-04-09 00:53:28.708738 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.91s 2026-04-09 00:53:28.708742 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.22s 2026-04-09 00:53:28.708745 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.58s 2026-04-09 00:53:28.708749 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.56s 2026-04-09
00:53:28.708753 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.54s 2026-04-09 00:53:28.708757 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.35s 2026-04-09 00:53:28.708760 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.30s 2026-04-09 00:53:28.708764 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.23s 2026-04-09 00:53:28.708768 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.98s 2026-04-09 00:53:28.708772 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.94s 2026-04-09
00:53:28.708776 | orchestrator | 2026-04-09 00:53:28 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:53:28.708780 | orchestrator | 2026-04-09 00:53:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:31.744643 | orchestrator | 2026-04-09 00:53:31 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:53:31.744985 | orchestrator | 2026-04-09 00:53:31 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state STARTED 2026-04-09 00:53:31.745117 | orchestrator | 2026-04-09 00:53:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09
00:54:44.736158 | orchestrator | 2026-04-09 00:54:44 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:54:44.738518 | orchestrator | 2026-04-09 00:54:44 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:54:44.740093 | orchestrator | 2026-04-09 00:54:44 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED 2026-04-09 00:54:44.749218 | orchestrator | 2026-04-09 00:54:44 | INFO  | Task 30411737-194c-4820-a2ff-283e074dcd91 is in state SUCCESS 2026-04-09 00:54:44.751034 | orchestrator | 2026-04-09 00:54:44.751100 | orchestrator | 2026-04-09
00:54:44.751116 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:54:44.751128 | orchestrator | 2026-04-09 00:54:44.751139 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:54:44.751152 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.381) 0:00:00.381 ******** 2026-04-09 00:54:44.751176 | orchestrator | ok: [testbed-node-0] 2026-04-09 
00:54:44.751188 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.751200 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.751211 | orchestrator | 2026-04-09 00:54:44.751224 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:54:44.751235 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:00.314) 0:00:00.695 ******** 2026-04-09 00:54:44.751248 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-09 00:54:44.751260 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-09 00:54:44.751271 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-09 00:54:44.751284 | orchestrator | 2026-04-09 00:54:44.751295 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-09 00:54:44.751331 | orchestrator | 2026-04-09 00:54:44.751340 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-09 00:54:44.751348 | orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:00.239) 0:00:00.935 ******** 2026-04-09 00:54:44.751355 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.751362 | orchestrator | 2026-04-09 00:54:44.751369 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-09 00:54:44.751376 | orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:00.804) 0:00:01.740 ******** 2026-04-09 00:54:44.751383 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.751390 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.751439 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.751451 | orchestrator | 2026-04-09 00:54:44.751467 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-09 00:54:44.751512 | 
orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:01.247) 0:00:02.987 ******** 2026-04-09 00:54:44.751590 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.751605 | orchestrator | 2026-04-09 00:54:44.751617 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-09 00:54:44.751690 | orchestrator | Thursday 09 April 2026 00:48:30 +0000 (0:00:01.144) 0:00:04.132 ******** 2026-04-09 00:54:44.751703 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.751714 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.751726 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.751737 | orchestrator | 2026-04-09 00:54:44.751750 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-09 00:54:44.751782 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:00.809) 0:00:04.942 ******** 2026-04-09 00:54:44.751797 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-09 00:54:44.751810 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-09 00:54:44.751822 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-09 00:54:44.751830 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-09 00:54:44.751838 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-09 00:54:44.751847 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-09 00:54:44.751855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-09 00:54:44.751862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 
'value': 128}) 2026-04-09 00:54:44.751870 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-09 00:54:44.751878 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-09 00:54:44.751886 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-09 00:54:44.751893 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-09 00:54:44.751901 | orchestrator | 2026-04-09 00:54:44.751908 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-09 00:54:44.751916 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:03.132) 0:00:08.075 ******** 2026-04-09 00:54:44.751924 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-09 00:54:44.751932 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-09 00:54:44.751940 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-09 00:54:44.752022 | orchestrator | 2026-04-09 00:54:44.752057 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-09 00:54:44.752064 | orchestrator | Thursday 09 April 2026 00:48:35 +0000 (0:00:00.872) 0:00:08.947 ******** 2026-04-09 00:54:44.752070 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-09 00:54:44.752077 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-09 00:54:44.752084 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-09 00:54:44.752091 | orchestrator | 2026-04-09 00:54:44.752097 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-09 00:54:44.752128 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:01.859) 0:00:10.806 ******** 2026-04-09 00:54:44.752135 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  
2026-04-09 00:54:44.752142 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.752165 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-09 00:54:44.752214 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.752221 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-09 00:54:44.752227 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.752234 | orchestrator | 2026-04-09 00:54:44.752305 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-09 00:54:44.752319 | orchestrator | Thursday 09 April 2026 00:48:37 +0000 (0:00:00.929) 0:00:11.736 ******** 2026-04-09 00:54:44.752330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.752348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}}) 2026-04-09 00:54:44.752356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.752363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.752370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-04-09 00:54:44.752443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.752455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.752463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.752475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.752482 | orchestrator | 2026-04-09 00:54:44.752490 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-09 00:54:44.752496 | orchestrator | Thursday 09 April 2026 00:48:40 +0000 (0:00:02.967) 0:00:14.703 ******** 2026-04-09 00:54:44.752503 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.752510 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.752517 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.752524 | orchestrator | 2026-04-09 00:54:44.752531 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-09 00:54:44.752538 | orchestrator | Thursday 09 April 2026 00:48:41 +0000 (0:00:01.052) 0:00:15.756 ******** 2026-04-09 00:54:44.752545 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-09 00:54:44.752551 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-09 00:54:44.752558 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-09 00:54:44.752565 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-09 00:54:44.752572 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-09 00:54:44.752579 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-09 00:54:44.752585 | orchestrator | 2026-04-09 00:54:44.752592 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-09 00:54:44.752615 | orchestrator | Thursday 09 April 2026 00:48:44 +0000 (0:00:03.097) 0:00:18.853 ******** 2026-04-09 00:54:44.752623 | orchestrator | changed: [testbed-node-1] 
2026-04-09 00:54:44.752629 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.752636 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.752643 | orchestrator | 2026-04-09 00:54:44.752650 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-09 00:54:44.752657 | orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:01.629) 0:00:20.483 ******** 2026-04-09 00:54:44.752664 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.752671 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.752677 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.752684 | orchestrator | 2026-04-09 00:54:44.752708 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-09 00:54:44.752715 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:01.644) 0:00:22.127 ******** 2026-04-09 00:54:44.752722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.752736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.752751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.752759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:54:44.752766 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.752773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.752781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.752788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.752795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab', 
'__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:54:44.752806 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.752822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.752830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.752837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.752887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:54:44.752895 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.752902 | orchestrator | 2026-04-09 00:54:44.752909 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-09 00:54:44.752968 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:01.217) 0:00:23.344 ******** 2026-04-09 00:54:44.752977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.752984 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753020 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.753028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:54:44.753070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.753106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.753215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:54:44.753266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab', '__omit_place_holder__042af9688da66b3cc7659811e74491ff55fa35ab'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:54:44.753275 | orchestrator | 2026-04-09 00:54:44.753282 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-09 00:54:44.753289 | orchestrator | Thursday 09 April 2026 00:48:53 +0000 (0:00:04.505) 0:00:27.849 ******** 2026-04-09 00:54:44.753296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.753356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 
00:54:44.753363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.753375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.753382 | orchestrator | 2026-04-09 00:54:44.753389 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-09 00:54:44.753395 | orchestrator | Thursday 09 April 2026 00:48:57 +0000 (0:00:03.539) 0:00:31.389 ******** 2026-04-09 00:54:44.753402 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 00:54:44.753409 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 00:54:44.753416 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 00:54:44.753423 | orchestrator | 2026-04-09 00:54:44.753456 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-09 00:54:44.753464 | 
orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:02.398) 0:00:33.788 ******** 2026-04-09 00:54:44.753470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 00:54:44.753477 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 00:54:44.753484 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 00:54:44.753491 | orchestrator | 2026-04-09 00:54:44.753502 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-09 00:54:44.753509 | orchestrator | Thursday 09 April 2026 00:49:04 +0000 (0:00:04.734) 0:00:38.523 ******** 2026-04-09 00:54:44.753516 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.753523 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.753533 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.753540 | orchestrator | 2026-04-09 00:54:44.753547 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-09 00:54:44.753553 | orchestrator | Thursday 09 April 2026 00:49:05 +0000 (0:00:00.781) 0:00:39.304 ******** 2026-04-09 00:54:44.753599 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 00:54:44.753608 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 00:54:44.753615 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 00:54:44.753622 | orchestrator | 2026-04-09 00:54:44.753629 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-09 00:54:44.753636 | 
orchestrator | Thursday 09 April 2026 00:49:07 +0000 (0:00:01.973) 0:00:41.278 ******** 2026-04-09 00:54:44.753643 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 00:54:44.753692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 00:54:44.753701 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 00:54:44.753708 | orchestrator | 2026-04-09 00:54:44.753715 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-09 00:54:44.753721 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:01.935) 0:00:43.213 ******** 2026-04-09 00:54:44.753728 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.753740 | orchestrator | 2026-04-09 00:54:44.753767 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-09 00:54:44.753774 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:00.467) 0:00:43.681 ******** 2026-04-09 00:54:44.753781 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-09 00:54:44.753821 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-09 00:54:44.753828 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-09 00:54:44.753835 | orchestrator | 2026-04-09 00:54:44.753842 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-09 00:54:44.753849 | orchestrator | Thursday 09 April 2026 00:49:11 +0000 (0:00:01.921) 0:00:45.602 ******** 2026-04-09 00:54:44.753856 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-09 00:54:44.753863 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy-internal.pem) 2026-04-09 00:54:44.753870 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-09 00:54:44.753877 | orchestrator | 2026-04-09 00:54:44.753883 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-09 00:54:44.753890 | orchestrator | Thursday 09 April 2026 00:49:13 +0000 (0:00:01.791) 0:00:47.394 ******** 2026-04-09 00:54:44.753897 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.754108 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.754125 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.754132 | orchestrator | 2026-04-09 00:54:44.754139 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-09 00:54:44.754146 | orchestrator | Thursday 09 April 2026 00:49:13 +0000 (0:00:00.234) 0:00:47.628 ******** 2026-04-09 00:54:44.754153 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.754163 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.754174 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.754204 | orchestrator | 2026-04-09 00:54:44.754270 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 00:54:44.754284 | orchestrator | Thursday 09 April 2026 00:49:13 +0000 (0:00:00.255) 0:00:47.884 ******** 2026-04-09 00:54:44.754298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.754320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.754342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.754369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.754381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.754549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.754567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.754577 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.754595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.754606 | orchestrator | 2026-04-09 00:54:44.754622 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 00:54:44.754633 | orchestrator | Thursday 09 April 2026 00:49:17 +0000 (0:00:03.298) 0:00:51.182 ******** 2026-04-09 00:54:44.754643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': 
'30'}}})  2026-04-09 00:54:44.754663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.754676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.754687 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.754699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.754740 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.754753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.754778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.754797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.754907 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.754921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.754932 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.754965 | orchestrator | 2026-04-09 00:54:44.754978 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-09 00:54:44.754991 | orchestrator | Thursday 09 April 2026 00:49:18 +0000 (0:00:00.755) 0:00:51.938 ******** 2026-04-09 00:54:44.755003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.755015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.755027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.755061 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.755127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.755154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.755169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.755181 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.755195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.755208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.755245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.755259 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.755306 | orchestrator | 2026-04-09 00:54:44.755318 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-09 00:54:44.755330 | orchestrator | Thursday 09 April 2026 00:49:18 +0000 (0:00:00.852) 0:00:52.790 ******** 2026-04-09 00:54:44.755342 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 00:54:44.755355 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 00:54:44.755440 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-09 00:54:44.755453 | orchestrator | 2026-04-09 00:54:44.755465 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-09 00:54:44.755477 | orchestrator | Thursday 09 April 2026 00:49:20 +0000 (0:00:01.471) 0:00:54.262 ******** 2026-04-09 00:54:44.755492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 00:54:44.755512 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 00:54:44.755525 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-09 00:54:44.755537 | orchestrator | 2026-04-09 00:54:44.755592 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-09 00:54:44.755610 | orchestrator | Thursday 09 April 2026 00:49:21 +0000 (0:00:01.377) 0:00:55.640 ******** 2026-04-09 00:54:44.755622 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 00:54:44.755649 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 00:54:44.755663 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 00:54:44.755676 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.755689 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 00:54:44.755702 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 00:54:44.755748 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.755770 | orchestrator | skipping: [testbed-node-2] 
=> (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 00:54:44.755783 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.755843 | orchestrator | 2026-04-09 00:54:44.755857 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-09 00:54:44.755870 | orchestrator | Thursday 09 April 2026 00:49:23 +0000 (0:00:01.912) 0:00:57.552 ******** 2026-04-09 00:54:44.755884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.755898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.755912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.755938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.755979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.755993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.756006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.756019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.756058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.756072 | orchestrator | 2026-04-09 00:54:44.756084 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-09 00:54:44.756095 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:02.382) 0:00:59.935 ******** 2026-04-09 00:54:44.756116 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:54:44.756129 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:54:44.756141 | orchestrator | } 2026-04-09 00:54:44.756153 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:54:44.756164 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:54:44.756176 | orchestrator | } 2026-04-09 00:54:44.756188 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:54:44.756200 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:54:44.756212 | orchestrator | } 2026-04-09 00:54:44.756224 | orchestrator | 2026-04-09 00:54:44.756260 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:54:44.756274 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.520) 0:01:00.455 ******** 2026-04-09 00:54:44.756287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.756311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.756332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.756345 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.756358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.756372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.756386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.756408 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.756420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.756427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.756444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.756456 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.756468 | orchestrator | 2026-04-09 00:54:44.756480 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-09 00:54:44.756493 | orchestrator | Thursday 09 April 2026 00:49:27 +0000 (0:00:01.027) 0:01:01.483 ******** 2026-04-09 00:54:44.756505 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.756518 | orchestrator | 2026-04-09 00:54:44.756530 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-09 00:54:44.756542 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:00.762) 0:01:02.245 ******** 2026-04-09 00:54:44.756558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.756580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.756593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.756633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.756640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.756677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.756697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756727 | orchestrator | 2026-04-09 00:54:44.756739 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-09 00:54:44.756751 | orchestrator | Thursday 09 April 2026 00:49:32 +0000 (0:00:04.047) 0:01:06.293 ******** 2026-04-09 00:54:44.756764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.756784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.756797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756840 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.756856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.756864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.756876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756883 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756890 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.756897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.756904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.756918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.756937 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.757076 | orchestrator | 2026-04-09 00:54:44.757103 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-09 00:54:44.757112 | orchestrator | Thursday 09 April 2026 00:49:33 +0000 (0:00:00.813) 0:01:07.106 ******** 2026-04-09 00:54:44.757119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757135 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.757142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757156 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.757163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757177 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.757184 | orchestrator | 2026-04-09 00:54:44.757191 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-09 00:54:44.757197 | orchestrator | Thursday 09 April 2026 00:49:34 +0000 (0:00:01.241) 0:01:08.348 
******** 2026-04-09 00:54:44.757204 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.757211 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.757218 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.757225 | orchestrator | 2026-04-09 00:54:44.757231 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-09 00:54:44.757238 | orchestrator | Thursday 09 April 2026 00:49:35 +0000 (0:00:01.184) 0:01:09.533 ******** 2026-04-09 00:54:44.757245 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.757252 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.757258 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.757265 | orchestrator | 2026-04-09 00:54:44.757272 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-09 00:54:44.757278 | orchestrator | Thursday 09 April 2026 00:49:37 +0000 (0:00:02.107) 0:01:11.640 ******** 2026-04-09 00:54:44.757285 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.757292 | orchestrator | 2026-04-09 00:54:44.757299 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-09 00:54:44.757305 | orchestrator | Thursday 09 April 2026 00:49:38 +0000 (0:00:00.768) 0:01:12.410 ******** 2026-04-09 00:54:44.757330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.757349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.757373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.757392 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 
00:54:44.757414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757421 | orchestrator | 2026-04-09 00:54:44.757428 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-09 00:54:44.757435 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:04.392) 0:01:16.802 ******** 2026-04-09 00:54:44.757443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 
00:54:44.757455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757477 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.757484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.757492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757506 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.757517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.757532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.757546 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.757553 | orchestrator | 2026-04-09 00:54:44.757560 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-09 00:54:44.757566 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:01.132) 0:01:17.934 ******** 2026-04-09 00:54:44.757574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757622 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.757629 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
00:54:44.757635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.757654 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.757661 | orchestrator | 2026-04-09 00:54:44.757667 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-09 00:54:44.757687 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:00.948) 0:01:18.882 ******** 2026-04-09 00:54:44.757695 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.757701 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.757708 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.757715 | orchestrator | 2026-04-09 00:54:44.757722 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-09 00:54:44.757729 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:01.395) 0:01:20.278 ******** 2026-04-09 00:54:44.757735 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.757742 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.757749 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.757755 | orchestrator | 2026-04-09 00:54:44.757762 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-09 00:54:44.757769 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:01.960) 0:01:22.239 ******** 2026-04-09 00:54:44.757776 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:54:44.757783 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.757789 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.757796 | orchestrator | 2026-04-09 00:54:44.757807 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-09 00:54:44.757814 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.255) 0:01:22.494 ******** 2026-04-09 00:54:44.757821 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.757828 | orchestrator | 2026-04-09 00:54:44.757837 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-09 00:54:44.757844 | orchestrator | Thursday 09 April 2026 00:49:49 +0000 (0:00:00.717) 0:01:23.212 ******** 2026-04-09 00:54:44.757852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:54:44.757860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:54:44.757867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:54:44.757878 | orchestrator | 2026-04-09 00:54:44.757885 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-09 00:54:44.757892 | orchestrator | Thursday 09 April 2026 00:49:54 +0000 (0:00:05.264) 0:01:28.477 ******** 2026-04-09 00:54:44.757899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:54:44.757906 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.757920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:54:44.757927 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.757935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:54:44.757957 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.757964 | orchestrator | 2026-04-09 00:54:44.757971 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-09 00:54:44.757978 | orchestrator | Thursday 09 April 2026 00:49:56 +0000 (0:00:01.746) 0:01:30.223 ******** 2026-04-09 00:54:44.757986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:54:44.757997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:54:44.758005 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.758053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:54:44.758063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:54:44.758070 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.758077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:54:44.758092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:54:44.758099 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.758106 | orchestrator | 2026-04-09 00:54:44.758113 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-04-09 00:54:44.758120 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:01.961) 0:01:32.184 ******** 2026-04-09 00:54:44.758126 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.758133 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.758140 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.758146 | orchestrator | 2026-04-09 00:54:44.758153 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-09 00:54:44.758160 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:00.380) 0:01:32.565 ******** 2026-04-09 00:54:44.758167 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.758173 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.758180 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.758187 | orchestrator | 2026-04-09 00:54:44.758194 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-09 00:54:44.758201 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:01.472) 0:01:34.038 ******** 2026-04-09 00:54:44.758208 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.758215 | orchestrator | 2026-04-09 00:54:44.758226 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-09 00:54:44.758232 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:00.820) 0:01:34.859 ******** 2026-04-09 00:54:44.758240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.758248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758273 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.758297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.758323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758359 | orchestrator | 2026-04-09 00:54:44.758365 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-09 00:54:44.758372 | orchestrator | Thursday 09 April 2026 00:50:04 +0000 (0:00:03.896) 0:01:38.755 ******** 2026-04-09 00:54:44.758379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.758387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758413 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758420 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.758428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.758435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758468 | orchestrator | 
skipping: [testbed-node-2] 2026-04-09 00:54:44.758475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.758483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758504 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.758511 | orchestrator | 2026-04-09 00:54:44.758518 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-09 00:54:44.758525 | orchestrator | Thursday 09 April 2026 00:50:05 +0000 (0:00:00.791) 0:01:39.547 ******** 2026-04-09 00:54:44.758532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.758543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.758557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.758565 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.758572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.758579 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.758586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.758593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.758600 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.758607 | orchestrator | 2026-04-09 00:54:44.758613 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-09 00:54:44.758620 | orchestrator | Thursday 09 April 2026 00:50:06 +0000 (0:00:00.798) 0:01:40.345 ******** 2026-04-09 00:54:44.758627 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.758634 | orchestrator | changed: [testbed-node-0] 
2026-04-09 00:54:44.758640 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.758647 | orchestrator | 2026-04-09 00:54:44.758654 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-09 00:54:44.758660 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:01.218) 0:01:41.563 ******** 2026-04-09 00:54:44.758667 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.758674 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.758680 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.758687 | orchestrator | 2026-04-09 00:54:44.758694 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-09 00:54:44.758700 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:02.027) 0:01:43.591 ******** 2026-04-09 00:54:44.758707 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.758714 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.758721 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.758728 | orchestrator | 2026-04-09 00:54:44.758734 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-09 00:54:44.758741 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:00.272) 0:01:43.864 ******** 2026-04-09 00:54:44.758747 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.758754 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.758761 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.758768 | orchestrator | 2026-04-09 00:54:44.758774 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-09 00:54:44.758781 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.606) 0:01:44.470 ******** 2026-04-09 00:54:44.758788 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.758794 | 
orchestrator | 2026-04-09 00:54:44.758801 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-09 00:54:44.758808 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:00.750) 0:01:45.220 ******** 2026-04-09 00:54:44.758815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.758833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:54:44.758841 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.758895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:54:44.758902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.758972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:54:44.758980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.758998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759022 | orchestrator | 2026-04-09 00:54:44.759029 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-09 00:54:44.759036 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:04.500) 0:01:49.721 ******** 2026-04-09 00:54:44.759043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.759050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:54:44.759077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759121 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.759128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.759139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-04-09 00:54:44.759146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759175 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759189 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.759200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.759208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:54:44.759215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.759261 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.759268 | orchestrator | 2026-04-09 00:54:44.759275 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-09 00:54:44.759281 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:01.784) 0:01:51.506 ******** 2026-04-09 00:54:44.759288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.759296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.759303 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.759310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.759317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.759324 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.759330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.759341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.759348 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.759355 | orchestrator | 2026-04-09 00:54:44.759362 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-09 00:54:44.759371 | orchestrator | Thursday 09 April 2026 00:50:18 +0000 (0:00:01.014) 0:01:52.520 ******** 2026-04-09 00:54:44.759378 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.759385 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.759392 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.759398 | orchestrator | 2026-04-09 00:54:44.759405 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-09 00:54:44.759412 | orchestrator | Thursday 09 April 2026 00:50:19 +0000 (0:00:01.236) 0:01:53.756 ******** 2026-04-09 00:54:44.759419 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.759425 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.759432 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.759439 | orchestrator | 2026-04-09 00:54:44.759446 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-09 00:54:44.759452 | orchestrator | Thursday 09 April 2026 00:50:21 +0000 (0:00:01.948) 0:01:55.704 ******** 2026-04-09 00:54:44.759459 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.759473 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.759480 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.759486 | orchestrator | 
2026-04-09 00:54:44.759493 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-09 00:54:44.759500 | orchestrator | Thursday 09 April 2026 00:50:22 +0000 (0:00:00.272) 0:01:55.977 ******** 2026-04-09 00:54:44.759507 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.759513 | orchestrator | 2026-04-09 00:54:44.759520 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-09 00:54:44.759527 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:01.016) 0:01:56.994 ******** 2026-04-09 00:54:44.759535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 00:54:44.759551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.759563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 00:54:44.759579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.759594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 00:54:44.759605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.759613 | orchestrator | 2026-04-09 00:54:44.759620 | orchestrator | TASK [haproxy-config : 
Add configuration for glance when using single external frontend] *** 2026-04-09 00:54:44.759627 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:04.661) 0:02:01.656 ******** 2026-04-09 00:54:44.759637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 00:54:44.759649 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.759657 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:54:44.759781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 00:54:44.759800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.759808 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.759824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 00:54:44.759836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.759843 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.759850 | orchestrator | 2026-04-09 00:54:44.759857 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-09 00:54:44.759864 | orchestrator | Thursday 09 April 2026 00:50:31 +0000 (0:00:03.348) 0:02:05.004 ******** 2026-04-09 00:54:44.759871 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:54:44.759884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:54:44.759895 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.759903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:54:44.759910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:54:44.759917 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.759924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:54:44.759931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:54:44.759938 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.759960 | orchestrator | 2026-04-09 00:54:44.759967 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-09 00:54:44.759974 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:03.000) 0:02:08.004 ******** 2026-04-09 00:54:44.759981 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.759988 | orchestrator | changed: 
[testbed-node-2] 2026-04-09 00:54:44.759995 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.760002 | orchestrator | 2026-04-09 00:54:44.760009 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-09 00:54:44.760015 | orchestrator | Thursday 09 April 2026 00:50:35 +0000 (0:00:01.203) 0:02:09.208 ******** 2026-04-09 00:54:44.760022 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.760029 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.760036 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.760042 | orchestrator | 2026-04-09 00:54:44.760049 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-09 00:54:44.760056 | orchestrator | Thursday 09 April 2026 00:50:37 +0000 (0:00:01.871) 0:02:11.079 ******** 2026-04-09 00:54:44.760063 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.760073 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.760081 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.760088 | orchestrator | 2026-04-09 00:54:44.760094 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-09 00:54:44.760101 | orchestrator | Thursday 09 April 2026 00:50:37 +0000 (0:00:00.242) 0:02:11.322 ******** 2026-04-09 00:54:44.760108 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.760115 | orchestrator | 2026-04-09 00:54:44.760122 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-09 00:54:44.760129 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:00.909) 0:02:12.232 ******** 2026-04-09 00:54:44.760143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.760151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.760159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.760166 | orchestrator | 2026-04-09 00:54:44.760173 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-09 00:54:44.760179 | orchestrator | Thursday 09 April 2026 00:50:41 +0000 (0:00:03.674) 0:02:15.907 ******** 2026-04-09 00:54:44.760187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.760198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-09 00:54:44.760205 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.760212 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.760226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.760233 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.760240 | orchestrator | 2026-04-09 00:54:44.760247 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-09 00:54:44.760254 | orchestrator | Thursday 09 April 2026 00:50:42 +0000 (0:00:00.358) 0:02:16.265 ******** 2026-04-09 00:54:44.760261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.760268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.760294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.760301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.760308 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.760315 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.760322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.760329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.760336 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.760343 | orchestrator | 2026-04-09 00:54:44.760349 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-09 00:54:44.760356 | orchestrator | Thursday 09 April 2026 00:50:43 +0000 (0:00:00.890) 0:02:17.155 ******** 2026-04-09 00:54:44.760367 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.760374 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.760381 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.760389 | orchestrator | 2026-04-09 00:54:44.760396 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-09 00:54:44.760404 | orchestrator | Thursday 09 April 2026 00:50:44 +0000 (0:00:01.281) 0:02:18.437 ******** 2026-04-09 00:54:44.760412 | 
orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.760420 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.760428 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.760435 | orchestrator | 2026-04-09 00:54:44.760443 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-09 00:54:44.760450 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:02.187) 0:02:20.625 ******** 2026-04-09 00:54:44.760458 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.760466 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.760473 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.760481 | orchestrator | 2026-04-09 00:54:44.760489 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-09 00:54:44.760497 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:00.305) 0:02:20.931 ******** 2026-04-09 00:54:44.760505 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.760513 | orchestrator | 2026-04-09 00:54:44.760521 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-09 00:54:44.760528 | orchestrator | Thursday 09 April 2026 00:50:48 +0000 (0:00:01.181) 0:02:22.112 ******** 2026-04-09 00:54:44.760546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:54:44.760557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:54:44.760581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:54:44.760595 | orchestrator | 2026-04-09 00:54:44.760603 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-09 00:54:44.760610 | orchestrator | Thursday 09 April 2026 00:50:52 +0000 (0:00:04.768) 0:02:26.880 ******** 2026-04-09 00:54:44.760623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:54:44.760633 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.760646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:54:44.760658 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.760676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:54:44.760685 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.760694 | orchestrator | 2026-04-09 00:54:44.760700 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-09 00:54:44.760707 | orchestrator | Thursday 09 April 2026 00:50:53 +0000 (0:00:00.643) 0:02:27.523 ******** 2026-04-09 00:54:44.760714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:54:44.760722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:54:44.760733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:54:44.760740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:54:44.760747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:54:44.760754 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.760761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:54:44.760768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 
'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:54:44.760775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:54:44.760782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:54:44.760789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:54:44.760796 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.760810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:54:44.760821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:54:44.760828 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:54:44.760839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:54:44.760846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:54:44.760852 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.760859 | orchestrator | 2026-04-09 00:54:44.760866 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-09 00:54:44.760873 | orchestrator | Thursday 09 April 2026 00:50:55 +0000 (0:00:01.443) 0:02:28.967 ******** 2026-04-09 00:54:44.760880 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.760886 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.760893 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.760900 | orchestrator | 2026-04-09 00:54:44.760906 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-09 00:54:44.760913 | orchestrator | Thursday 09 April 2026 00:50:56 +0000 (0:00:01.304) 0:02:30.272 ******** 2026-04-09 00:54:44.760920 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.760926 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.760933 | orchestrator 
| changed: [testbed-node-2] 2026-04-09 00:54:44.760940 | orchestrator | 2026-04-09 00:54:44.760988 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-09 00:54:44.760995 | orchestrator | Thursday 09 April 2026 00:50:58 +0000 (0:00:01.978) 0:02:32.250 ******** 2026-04-09 00:54:44.761002 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.761009 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.761015 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.761022 | orchestrator | 2026-04-09 00:54:44.761029 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-09 00:54:44.761036 | orchestrator | Thursday 09 April 2026 00:50:58 +0000 (0:00:00.257) 0:02:32.507 ******** 2026-04-09 00:54:44.761042 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.761049 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.761056 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.761062 | orchestrator | 2026-04-09 00:54:44.761069 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-09 00:54:44.761076 | orchestrator | Thursday 09 April 2026 00:50:58 +0000 (0:00:00.285) 0:02:32.793 ******** 2026-04-09 00:54:44.761083 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.761089 | orchestrator | 2026-04-09 00:54:44.761096 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-09 00:54:44.761103 | orchestrator | Thursday 09 April 2026 00:50:59 +0000 (0:00:01.060) 0:02:33.854 ******** 2026-04-09 00:54:44.761110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:54:44.761130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:54:44.761138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:54:44.761146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:54:44.761153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:54:44.761161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 
'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:54:44.761174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:54:44.761186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:54:44.761193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:54:44.761200 | orchestrator | 2026-04-09 00:54:44.761207 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-09 00:54:44.761214 | orchestrator | Thursday 09 April 2026 00:51:04 +0000 (0:00:04.222) 0:02:38.076 ******** 2026-04-09 00:54:44.761221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:54:44.761229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:54:44.761236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:54:44.761246 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.761261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:54:44.761269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:54:44.761276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:54:44.761283 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.761291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:54:44.761304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:54:44.761328 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:54:44.761342 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.761349 | orchestrator | 2026-04-09 00:54:44.761356 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-09 00:54:44.761362 | orchestrator | Thursday 09 April 2026 00:51:04 +0000 (0:00:00.602) 0:02:38.678 ******** 2026-04-09 00:54:44.761369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:54:44.761377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:54:44.761384 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.761391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:54:44.761398 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:54:44.761405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:54:44.761412 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.761419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:54:44.761426 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.761433 | orchestrator | 2026-04-09 00:54:44.761439 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-09 00:54:44.761446 | orchestrator | Thursday 09 April 2026 00:51:05 +0000 (0:00:01.233) 0:02:39.911 ******** 2026-04-09 00:54:44.761453 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.761464 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.761470 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.761477 | orchestrator | 2026-04-09 00:54:44.761484 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-09 00:54:44.761490 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:01.235) 0:02:41.147 ******** 2026-04-09 00:54:44.761497 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.761503 | orchestrator | changed: [testbed-node-1] 2026-04-09 
00:54:44.761510 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.761517 | orchestrator | 2026-04-09 00:54:44.761523 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-09 00:54:44.761530 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:02.060) 0:02:43.207 ******** 2026-04-09 00:54:44.761537 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.761543 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.761549 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.761555 | orchestrator | 2026-04-09 00:54:44.761561 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-09 00:54:44.761567 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.329) 0:02:43.537 ******** 2026-04-09 00:54:44.761574 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.761580 | orchestrator | 2026-04-09 00:54:44.761586 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-09 00:54:44.761592 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 (0:00:02.042) 0:02:45.580 ******** 2026-04-09 00:54:44.761605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.761613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.761620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.761631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.761641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.761651 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.761657 | orchestrator | 2026-04-09 00:54:44.761664 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-09 00:54:44.761670 | orchestrator | Thursday 09 April 2026 00:51:15 +0000 (0:00:03.848) 0:02:49.429 ******** 2026-04-09 00:54:44.761677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.761687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.761694 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.761701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.761713 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.761720 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.761727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.761734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.761743 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.761750 | orchestrator | 2026-04-09 00:54:44.761756 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-09 00:54:44.761762 | orchestrator | Thursday 09 April 2026 00:51:16 +0000 (0:00:00.612) 0:02:50.041 ******** 2026-04-09 00:54:44.761769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.761775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.761782 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.761788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.761795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.761801 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.761807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.761814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.761820 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.761827 | orchestrator | 2026-04-09 00:54:44.761836 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-09 00:54:44.761842 | orchestrator | Thursday 09 April 2026 00:51:17 +0000 (0:00:00.930) 0:02:50.971 ******** 2026-04-09 00:54:44.761849 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.761855 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.761861 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.761867 | orchestrator | 2026-04-09 00:54:44.761874 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-09 00:54:44.761880 | orchestrator | Thursday 09 April 2026 00:51:18 +0000 (0:00:01.213) 0:02:52.185 ******** 2026-04-09 00:54:44.761886 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.761892 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.761899 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.761905 | orchestrator | 2026-04-09 00:54:44.761911 | orchestrator | TASK [include_role : manila] 
*************************************************** 2026-04-09 00:54:44.761917 | orchestrator | Thursday 09 April 2026 00:51:20 +0000 (0:00:02.077) 0:02:54.263 ******** 2026-04-09 00:54:44.761924 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.761930 | orchestrator | 2026-04-09 00:54:44.761940 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-09 00:54:44.761956 | orchestrator | Thursday 09 April 2026 00:51:21 +0000 (0:00:01.179) 0:02:55.443 ******** 2026-04-09 00:54:44.761963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.761984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.761991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.761997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.762044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 
'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.762065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762103 | orchestrator | 2026-04-09 00:54:44.762109 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-09 00:54:44.762116 | orchestrator | Thursday 09 April 2026 00:51:25 +0000 (0:00:03.535) 0:02:58.978 ******** 2026-04-09 00:54:44.762123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.762129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762153 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.762168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.762175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762195 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.762201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 
'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.762218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.762242 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.762248 | orchestrator | 2026-04-09 00:54:44.762254 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-09 00:54:44.762261 | orchestrator | Thursday 09 April 2026 00:51:25 +0000 (0:00:00.537) 0:02:59.515 ******** 2026-04-09 00:54:44.762267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.762274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.762280 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.762287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.762293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.762300 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.762306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.762312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.762319 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.762325 | orchestrator | 2026-04-09 00:54:44.762332 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-09 00:54:44.762342 | orchestrator | Thursday 09 April 2026 00:51:26 +0000 (0:00:00.878) 0:03:00.394 ******** 2026-04-09 00:54:44.762348 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.762355 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.762361 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.762367 | orchestrator | 2026-04-09 00:54:44.762373 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-09 00:54:44.762380 | orchestrator | Thursday 09 April 2026 00:51:27 +0000 (0:00:01.226) 0:03:01.620 ******** 2026-04-09 00:54:44.762386 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.762392 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.762398 | orchestrator | 
changed: [testbed-node-2] 2026-04-09 00:54:44.762405 | orchestrator | 2026-04-09 00:54:44.762411 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-09 00:54:44.762421 | orchestrator | Thursday 09 April 2026 00:51:29 +0000 (0:00:01.944) 0:03:03.565 ******** 2026-04-09 00:54:44.762427 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.762434 | orchestrator | 2026-04-09 00:54:44.762440 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-09 00:54:44.762449 | orchestrator | Thursday 09 April 2026 00:51:30 +0000 (0:00:01.021) 0:03:04.586 ******** 2026-04-09 00:54:44.762455 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:54:44.762462 | orchestrator | 2026-04-09 00:54:44.762468 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-09 00:54:44.762474 | orchestrator | Thursday 09 April 2026 00:51:34 +0000 (0:00:03.489) 0:03:08.076 ******** 2026-04-09 00:54:44.762481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:54:44.762489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:54:44.762499 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.762513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:54:44.762520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:54:44.762527 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.762534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:54:44.762545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:54:44.762552 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.762558 | orchestrator | 2026-04-09 00:54:44.762564 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-09 00:54:44.762570 | orchestrator | Thursday 09 April 2026 00:51:37 +0000 (0:00:03.642) 0:03:11.719 ******** 2026-04-09 00:54:44.762584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:54:44.762592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-09 00:54:44.762611 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.762622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:54:44.762636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-09 00:54:44.762643 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.762650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:54:44.762660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-09 00:54:44.762667 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.762673 | orchestrator |
2026-04-09 00:54:44.762680 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-04-09 00:54:44.762686 | orchestrator | Thursday 09 April 2026 00:51:39 +0000 (0:00:01.629) 0:03:13.348 ********
2026-04-09 00:54:44.762692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-09 00:54:44.762706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-09 00:54:44.762713 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.762719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-09 00:54:44.762726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-09 00:54:44.762733 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.762739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-09 00:54:44.762749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-09 00:54:44.762756 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.762762 | orchestrator |
2026-04-09 00:54:44.762768 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-04-09 00:54:44.762774 | orchestrator | Thursday 09 April 2026 00:51:41 +0000 (0:00:02.058) 0:03:15.406 ********
2026-04-09 00:54:44.762781 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:44.762787 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:44.762793 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:44.762799 | orchestrator |
2026-04-09 00:54:44.762806 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-04-09 00:54:44.762812 | orchestrator | Thursday 09 April 2026 00:51:43 +0000 (0:00:02.006) 0:03:17.413 ********
2026-04-09 00:54:44.762818 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.762824 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.762831 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.762837 | orchestrator |
2026-04-09 00:54:44.762843 | orchestrator | TASK [include_role : masakari] *************************************************
2026-04-09 00:54:44.762849 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:01.137) 0:03:18.550 ********
2026-04-09 00:54:44.762855 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.762861 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.762868 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.762874 | orchestrator |
2026-04-09 00:54:44.762880 | orchestrator | TASK [include_role : memcached] ************************************************
2026-04-09 00:54:44.762886 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:00.844) 0:03:19.394 ********
2026-04-09 00:54:44.762892 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:44.762898 | orchestrator |
2026-04-09 00:54:44.762905 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-04-09 00:54:44.762914 | orchestrator | Thursday 09 April 2026 00:51:46 +0000 (0:00:01.218) 0:03:20.613 ********
2026-04-09 00:54:44.762926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:54:44.762933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:54:44.762954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:54:44.762961 | orchestrator |
2026-04-09 00:54:44.762968 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-04-09 00:54:44.762974 | orchestrator | Thursday 09 April 2026 00:51:48 +0000 (0:00:02.087) 0:03:22.701 ********
2026-04-09 00:54:44.762980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:54:44.762987 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.762993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:54:44.763000 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.763013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-09 00:54:44.763020 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.763026 | orchestrator |
2026-04-09 00:54:44.763032 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-04-09 00:54:44.763042 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.350) 0:03:23.051 ********
2026-04-09 00:54:44.763049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-09 00:54:44.763055 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.763062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-09 00:54:44.763068 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.763075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-09 00:54:44.763081 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.763087 | orchestrator |
2026-04-09 00:54:44.763093 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-09 00:54:44.763099 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.470) 0:03:23.522 ********
2026-04-09 00:54:44.763106 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.763112 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.763118 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.763124 | orchestrator |
2026-04-09 00:54:44.763130 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-09 00:54:44.763137 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.406) 0:03:23.929 ********
2026-04-09 00:54:44.763143 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.763149 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.763155 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.763161 | orchestrator |
2026-04-09 00:54:44.763168 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-09 00:54:44.763174 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:01.716) 0:03:25.645 ********
2026-04-09 00:54:44.763180 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.763186 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.763192 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.763199 | orchestrator |
2026-04-09 00:54:44.763205 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-09 00:54:44.763211 | orchestrator | Thursday 09 April 2026 00:51:52 +0000 (0:00:00.406) 0:03:26.052 ********
2026-04-09 00:54:44.763217 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:44.763223 | orchestrator |
2026-04-09 00:54:44.763229 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-04-09 00:54:44.763236 | orchestrator | Thursday 09 April 2026 00:51:53 +0000 (0:00:01.092) 0:03:27.144 ********
2026-04-09 00:54:44.763242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:54:44.763260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:54:44.763267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-09 00:54:44.763274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-09 00:54:44.763281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:54:44.763289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-09 00:54:44.763304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-09 00:54:44.763314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-09 00:54:44.763321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:54:44.763328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-09 00:54:44.763334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-09 00:54:44.763341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:54:44.763358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-09 00:54:44.763365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:54:44.763372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:54:44.763379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 00:54:44.763386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-09 00:54:44.763403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:54:44.763410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-09 00:54:44.763417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:54:44.763423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-09 00:54:44.763430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-09 00:54:44.763437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-09 00:54:44.763452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:54:44.763461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:54:44.763468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:54:44.763482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.763507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:54:44.763514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:54:44.763528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.763538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:54:44.763559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 
00:54:44.763566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.763572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:54:44.763579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-04-09 00:54:44.763586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:54:44.763605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:54:44.763708 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.763722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:54:44.763733 | orchestrator | 2026-04-09 00:54:44.763739 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-09 00:54:44.763746 | orchestrator | Thursday 09 April 2026 00:51:57 +0000 (0:00:04.445) 0:03:31.590 ******** 2026-04-09 00:54:44.763752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.763767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:54:44.763780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:54:44.763790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.763807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.763816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:54:44.763823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:54:44.763830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:54:44.763847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.763854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.763875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.763881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.763904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:54:44.763911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:54:44.763918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:54:44.763928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.763953 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.763960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.763973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:54:44.763980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.763987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:54:44.763999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:54:44.764016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.764025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:54:44.764032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.764039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:54:44.764048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:54:44.764062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:54:44.764074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:54:44.764087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.764097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:54:44.764111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.764131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:54:44.764141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:54:44.764148 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.764154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:54:44.764161 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.764167 | orchestrator | 2026-04-09 00:54:44.764174 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-09 00:54:44.764180 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:01.981) 0:03:33.571 ******** 2026-04-09 00:54:44.764186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.764193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.764200 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.764206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.764213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.764219 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.764228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.764235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.764241 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.764248 | orchestrator | 2026-04-09 00:54:44.764254 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-09 00:54:44.764260 | orchestrator | Thursday 09 April 2026 00:52:01 +0000 (0:00:02.157) 0:03:35.729 ******** 2026-04-09 00:54:44.764266 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.764273 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.764282 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.764289 | orchestrator | 2026-04-09 00:54:44.764295 | 
orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-09 00:54:44.764301 | orchestrator | Thursday 09 April 2026 00:52:03 +0000 (0:00:01.422) 0:03:37.152 ******** 2026-04-09 00:54:44.764307 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.764318 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.764325 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.764333 | orchestrator | 2026-04-09 00:54:44.764340 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-09 00:54:44.764347 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:02.030) 0:03:39.182 ******** 2026-04-09 00:54:44.764355 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.764362 | orchestrator | 2026-04-09 00:54:44.764369 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-09 00:54:44.764376 | orchestrator | Thursday 09 April 2026 00:52:06 +0000 (0:00:01.329) 0:03:40.512 ******** 2026-04-09 00:54:44.764433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:54:44.764449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:54:44.764466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:54:44.764479 | orchestrator | 2026-04-09 00:54:44.764486 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-09 00:54:44.764494 | orchestrator | Thursday 09 April 2026 00:52:10 +0000 (0:00:03.428) 0:03:43.940 ******** 2026-04-09 00:54:44.764501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 00:54:44.764509 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.764518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 00:54:44.764525 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.764533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}}}})  2026-04-09 00:54:44.764541 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.764548 | orchestrator | 2026-04-09 00:54:44.764556 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-09 00:54:44.764563 | orchestrator | Thursday 09 April 2026 00:52:10 +0000 (0:00:00.777) 0:03:44.718 ******** 2026-04-09 00:54:44.764577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:54:44.764589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:54:44.764597 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.764604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:54:44.764612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:54:44.764619 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.764626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}})  2026-04-09 00:54:44.764634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:54:44.764641 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.764648 | orchestrator | 2026-04-09 00:54:44.764656 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-09 00:54:44.764664 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:00.698) 0:03:45.416 ******** 2026-04-09 00:54:44.764671 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.764678 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.764684 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.764690 | orchestrator | 2026-04-09 00:54:44.764697 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-09 00:54:44.764708 | orchestrator | Thursday 09 April 2026 00:52:12 +0000 (0:00:01.204) 0:03:46.621 ******** 2026-04-09 00:54:44.764719 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.764730 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.764741 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.764750 | orchestrator | 2026-04-09 00:54:44.764760 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-09 00:54:44.764768 | orchestrator | Thursday 09 April 2026 00:52:14 +0000 (0:00:01.912) 0:03:48.533 ******** 2026-04-09 00:54:44.764780 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.764791 | orchestrator | 2026-04-09 00:54:44.764802 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-09 
00:54:44.764811 | orchestrator | Thursday 09 April 2026 00:52:15 +0000 (0:00:01.397) 0:03:49.931 ******** 2026-04-09 00:54:44.764822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.764851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.764861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.764868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 
'], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.764876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 
'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.764909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.764933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.764973 | orchestrator | 2026-04-09 00:54:44.764983 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-09 00:54:44.764989 | orchestrator | Thursday 09 April 2026 00:52:22 +0000 (0:00:06.094) 0:03:56.025 ******** 2026-04-09 00:54:44.764996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.765003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.765010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.765021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.765028 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.765042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.765049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.765056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.765063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.765073 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.765080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.765186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.765197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.765204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.765211 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.765222 | orchestrator | 2026-04-09 00:54:44.765229 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-09 00:54:44.765235 | orchestrator | Thursday 09 April 2026 00:52:22 +0000 (0:00:00.664) 0:03:56.689 ******** 2026-04-09 00:54:44.765242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765249 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765289 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.765305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765365 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.765374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.765412 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.765422 | orchestrator | 2026-04-09 00:54:44.765431 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-09 00:54:44.765447 | orchestrator | Thursday 09 April 2026 00:52:24 +0000 (0:00:01.285) 0:03:57.975 ******** 2026-04-09 00:54:44.765457 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.765467 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.765476 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.765486 | 
orchestrator | 2026-04-09 00:54:44.765496 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-09 00:54:44.765506 | orchestrator | Thursday 09 April 2026 00:52:25 +0000 (0:00:01.131) 0:03:59.107 ******** 2026-04-09 00:54:44.765515 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.765526 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.765537 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.765548 | orchestrator | 2026-04-09 00:54:44.765556 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-09 00:54:44.765563 | orchestrator | Thursday 09 April 2026 00:52:27 +0000 (0:00:02.157) 0:04:01.264 ******** 2026-04-09 00:54:44.765569 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.765576 | orchestrator | 2026-04-09 00:54:44.765582 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-09 00:54:44.765588 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:01.416) 0:04:02.680 ******** 2026-04-09 00:54:44.765594 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-09 00:54:44.765601 | orchestrator | 2026-04-09 00:54:44.765607 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-09 00:54:44.765613 | orchestrator | Thursday 09 April 2026 00:52:29 +0000 (0:00:00.750) 0:04:03.431 ******** 2026-04-09 00:54:44.765620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:54:44.765627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:54:44.765676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:54:44.765685 | orchestrator | 2026-04-09 00:54:44.765691 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-09 00:54:44.765697 | orchestrator | Thursday 09 April 2026 00:52:33 +0000 (0:00:04.232) 0:04:07.663 ******** 2026-04-09 00:54:44.765704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.765715 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.765722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.765728 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.765735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.765741 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.765747 | orchestrator | 2026-04-09 00:54:44.765754 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-09 00:54:44.765760 | orchestrator | Thursday 09 April 2026 00:52:35 +0000 (0:00:01.630) 0:04:09.294 ******** 2026-04-09 00:54:44.765767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:54:44.765774 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:54:44.765780 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.765787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:54:44.765795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:54:44.765803 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.765810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:54:44.765818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:54:44.765825 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.765832 | orchestrator | 2026-04-09 00:54:44.765840 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 00:54:44.765847 | orchestrator | Thursday 09 April 2026 00:52:37 +0000 (0:00:01.993) 0:04:11.287 ******** 2026-04-09 00:54:44.765854 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.765861 | 
orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.765869 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.765876 | orchestrator | 2026-04-09 00:54:44.765898 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 00:54:44.765912 | orchestrator | Thursday 09 April 2026 00:52:39 +0000 (0:00:02.416) 0:04:13.704 ******** 2026-04-09 00:54:44.765922 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.765929 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.765937 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.765960 | orchestrator | 2026-04-09 00:54:44.765967 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-09 00:54:44.765974 | orchestrator | Thursday 09 April 2026 00:52:42 +0000 (0:00:03.172) 0:04:16.877 ******** 2026-04-09 00:54:44.765982 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-spicehtml5proxy) 2026-04-09 00:54:44.765989 | orchestrator | 2026-04-09 00:54:44.765996 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-09 00:54:44.766003 | orchestrator | Thursday 09 April 2026 00:52:43 +0000 (0:00:00.799) 0:04:17.676 ******** 2026-04-09 00:54:44.766010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.766041 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:54:44.766048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.766056 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.766063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.766071 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.766077 | orchestrator | 2026-04-09 00:54:44.766083 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-09 00:54:44.766090 | orchestrator | Thursday 09 April 2026 00:52:45 +0000 (0:00:01.504) 0:04:19.180 ******** 2026-04-09 00:54:44.766096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.766103 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.766109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.766121 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.766148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:54:44.766155 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.766162 | orchestrator | 2026-04-09 00:54:44.766168 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-09 00:54:44.766174 | orchestrator | Thursday 09 April 2026 00:52:46 +0000 (0:00:01.258) 0:04:20.439 ******** 2026-04-09 00:54:44.766181 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.766187 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.766193 | orchestrator 
| skipping: [testbed-node-0] 2026-04-09 00:54:44.766199 | orchestrator | 2026-04-09 00:54:44.766205 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 00:54:44.766212 | orchestrator | Thursday 09 April 2026 00:52:48 +0000 (0:00:01.531) 0:04:21.971 ******** 2026-04-09 00:54:44.766218 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.766224 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.766231 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.766237 | orchestrator | 2026-04-09 00:54:44.766243 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 00:54:44.766249 | orchestrator | Thursday 09 April 2026 00:52:50 +0000 (0:00:02.707) 0:04:24.678 ******** 2026-04-09 00:54:44.766255 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.766262 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.766268 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.766274 | orchestrator | 2026-04-09 00:54:44.766280 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-09 00:54:44.766286 | orchestrator | Thursday 09 April 2026 00:52:53 +0000 (0:00:03.251) 0:04:27.930 ******** 2026-04-09 00:54:44.766292 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-09 00:54:44.766299 | orchestrator | 2026-04-09 00:54:44.766305 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-09 00:54:44.766311 | orchestrator | Thursday 09 April 2026 00:52:55 +0000 (0:00:01.170) 0:04:29.100 ******** 2026-04-09 00:54:44.766318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:54:44.766324 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.766331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:54:44.766341 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.766348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:54:44.766354 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.766360 | orchestrator | 2026-04-09 00:54:44.766366 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-09 00:54:44.766373 | orchestrator | Thursday 09 April 2026 00:52:56 +0000 (0:00:01.235) 0:04:30.336 ******** 
2026-04-09 00:54:44.766379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:54:44.766386 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.766411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:54:44.766418 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.766425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:54:44.766431 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.766438 | orchestrator | 2026-04-09 00:54:44.766444 | orchestrator 
| TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-09 00:54:44.766450 | orchestrator | Thursday 09 April 2026 00:52:58 +0000 (0:00:01.697) 0:04:32.034 ******** 2026-04-09 00:54:44.766456 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.766463 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.766469 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.766475 | orchestrator | 2026-04-09 00:54:44.766482 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 00:54:44.766488 | orchestrator | Thursday 09 April 2026 00:52:59 +0000 (0:00:01.799) 0:04:33.833 ******** 2026-04-09 00:54:44.766494 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.766500 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.766507 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.766513 | orchestrator | 2026-04-09 00:54:44.766519 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 00:54:44.766525 | orchestrator | Thursday 09 April 2026 00:53:02 +0000 (0:00:02.519) 0:04:36.353 ******** 2026-04-09 00:54:44.766535 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.766541 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.766547 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.766553 | orchestrator | 2026-04-09 00:54:44.766560 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-09 00:54:44.766566 | orchestrator | Thursday 09 April 2026 00:53:05 +0000 (0:00:02.777) 0:04:39.130 ******** 2026-04-09 00:54:44.766572 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.766578 | orchestrator | 2026-04-09 00:54:44.766585 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-09 00:54:44.766591 | orchestrator | Thursday 09 
April 2026 00:53:06 +0000 (0:00:01.311) 0:04:40.442 ******** 2026-04-09 00:54:44.766598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:54:44.766605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:54:44.766629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.766654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:54:44.766660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:54:44.766667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.766706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:54:44.766716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:54:44.766723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766749 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.766756 | orchestrator | 2026-04-09 00:54:44.766763 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-09 00:54:44.766772 | orchestrator | Thursday 09 April 2026 00:53:10 +0000 (0:00:03.963) 0:04:44.406 ******** 2026-04-09 00:54:44.766778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:54:44.766789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:54:44.766796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:54:44.766832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.766839 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.766846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:54:44.766856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.766876 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.766883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:54:44.766906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:54:44.766913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:54:44.766930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:54:44.766937 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.766954 | orchestrator | 2026-04-09 00:54:44.766961 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-09 00:54:44.766967 | orchestrator | Thursday 09 April 2026 00:53:11 +0000 
(0:00:00.688) 0:04:45.094 ******** 2026-04-09 00:54:44.766974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:54:44.766980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:54:44.766987 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.766993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:54:44.766999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:54:44.767006 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.767012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:54:44.767018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:54:44.767025 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.767031 | orchestrator | 2026-04-09 00:54:44.767037 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-09 00:54:44.767043 | orchestrator | Thursday 09 
April 2026 00:53:11 +0000 (0:00:00.804) 0:04:45.899 ******** 2026-04-09 00:54:44.767050 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.767056 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.767062 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.767072 | orchestrator | 2026-04-09 00:54:44.767078 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-09 00:54:44.767100 | orchestrator | Thursday 09 April 2026 00:53:13 +0000 (0:00:01.431) 0:04:47.331 ******** 2026-04-09 00:54:44.767108 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.767114 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.767123 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.767129 | orchestrator | 2026-04-09 00:54:44.767135 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-09 00:54:44.767142 | orchestrator | Thursday 09 April 2026 00:53:15 +0000 (0:00:01.982) 0:04:49.313 ******** 2026-04-09 00:54:44.767149 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.767155 | orchestrator | 2026-04-09 00:54:44.767161 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-09 00:54:44.767167 | orchestrator | Thursday 09 April 2026 00:53:16 +0000 (0:00:01.245) 0:04:50.559 ******** 2026-04-09 00:54:44.767174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.767182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.767189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.767212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:54:44.767225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:54:44.767233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:54:44.767239 | orchestrator | 2026-04-09 00:54:44.767246 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 
2026-04-09 00:54:44.767252 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:06.105) 0:04:56.664 ******** 2026-04-09 00:54:44.767259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.767287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:54:44.767295 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.767302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.767309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:54:44.767316 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.767322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.767352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:54:44.767361 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.767367 | orchestrator | 2026-04-09 00:54:44.767374 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-09 00:54:44.767380 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:01.223) 0:04:57.887 ******** 2026-04-09 00:54:44.767386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.767393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:54:44.767400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:54:44.767407 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:54:44.767413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.767420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:54:44.767426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:54:44.767433 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.767443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:54:44.767449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:54:44.767456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  
2026-04-09 00:54:44.767462 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.767469 | orchestrator | 2026-04-09 00:54:44.767475 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-09 00:54:44.767481 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:01.162) 0:04:59.050 ******** 2026-04-09 00:54:44.767488 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.767494 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.767500 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.767506 | orchestrator | 2026-04-09 00:54:44.767513 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-09 00:54:44.767532 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:00.495) 0:04:59.546 ******** 2026-04-09 00:54:44.767539 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.767546 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.767555 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.767561 | orchestrator | 2026-04-09 00:54:44.767567 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-09 00:54:44.767574 | orchestrator | Thursday 09 April 2026 00:53:27 +0000 (0:00:01.553) 0:05:01.099 ******** 2026-04-09 00:54:44.767580 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:44.767587 | orchestrator | 2026-04-09 00:54:44.767593 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-09 00:54:44.767599 | orchestrator | Thursday 09 April 2026 00:53:28 +0000 (0:00:01.702) 0:05:02.802 ******** 2026-04-09 00:54:44.767606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:54:44.767613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:54:44.767624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-09 00:54:44.767631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.767661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:54:44.767669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:54:44.767676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:54:44.767700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.767722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:54:44.767730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.767754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.767761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 00:54:44.767784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.767805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.767816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:54:44.767840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 00:54:44.767848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 00:54:44.767854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.767905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.767913 | orchestrator | 2026-04-09 00:54:44.767922 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-09 00:54:44.767928 | orchestrator | Thursday 09 April 2026 00:53:33 +0000 (0:00:04.350) 0:05:07.152 ******** 2026-04-09 00:54:44.767935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 00:54:44.767967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:54:44.767974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.767988 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.768013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.768021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 00:54:44.768034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.768040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.768047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.768054 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.768061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 00:54:44.768074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:54:44.768081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.768092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.768099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.768106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.768113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-09 00:54:44.768127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.768133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.768144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.768150 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.768158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 00:54:44.768171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:54:44.768182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.768197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:54:44.768217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:54:44.768236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:54:44.768248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-09 00:54:44.768259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:54:44.768269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:54:44.768280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:54:44.768290 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.768301 | orchestrator |
2026-04-09 00:54:44.768308 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-09 00:54:44.768318 | orchestrator | Thursday 09 April 2026 00:53:34 +0000 (0:00:01.523) 0:05:08.676 ********
2026-04-09 00:54:44.768369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:54:44.768387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:54:44.768395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:54:44.768402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:54:44.768408 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.768415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:54:44.768421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:54:44.768428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:54:44.768435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:54:44.768441 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.768447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:54:44.768454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:54:44.768461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:54:44.768478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:54:44.768485 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.768492 | orchestrator |
2026-04-09 00:54:44.768498 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-09 00:54:44.768504 | orchestrator | Thursday 09 April 2026 00:53:35 +0000 (0:00:01.040) 0:05:09.716 ********
2026-04-09 00:54:44.768511 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.768517 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.768523 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.768530 | orchestrator |
2026-04-09 00:54:44.768536 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-09 00:54:44.768542 | orchestrator | Thursday 09 April 2026 00:53:36 +0000 (0:00:00.472) 0:05:10.188 ********
2026-04-09 00:54:44.768549 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.768555 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.768561 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.768567 | orchestrator |
2026-04-09 00:54:44.768573 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-09 00:54:44.768580 | orchestrator | Thursday 09 April 2026 00:53:37 +0000 (0:00:01.276) 0:05:11.465 ********
2026-04-09 00:54:44.768586 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:44.768592 | orchestrator |
2026-04-09 00:54:44.768599 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-09 00:54:44.768605 | orchestrator | Thursday 09 April 2026 00:53:39 +0000 (0:00:01.677) 0:05:13.143 ********
2026-04-09 00:54:44.768612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:54:44.768619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:54:44.768665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:54:44.768673 | orchestrator |
2026-04-09 00:54:44.768679 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-09 00:54:44.768686 | orchestrator | Thursday 09 April 2026 00:53:41 +0000 (0:00:02.433) 0:05:15.577 ********
2026-04-09 00:54:44.768693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:54:44.768700 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.768706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:54:44.768713 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.768720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:54:44.768731 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.768738 | orchestrator |
2026-04-09 00:54:44.768744 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-04-09 00:54:44.768750 | orchestrator | Thursday 09 April 2026 00:53:42 +0000 (0:00:00.386) 0:05:15.964 ********
2026-04-09 00:54:44.768757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-09 00:54:44.768764 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.768770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-09 00:54:44.768776 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.768786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-09 00:54:44.768793 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.768799 | orchestrator |
2026-04-09 00:54:44.768808 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-04-09 00:54:44.768814 | orchestrator | Thursday 09 April 2026 00:53:42 +0000 (0:00:00.961) 0:05:16.925 ********
2026-04-09 00:54:44.768820 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.768827 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.768833 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.768839 | orchestrator |
2026-04-09 00:54:44.768846 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-04-09 00:54:44.768852 | orchestrator | Thursday 09 April 2026 00:53:43 +0000 (0:00:00.429) 0:05:17.354 ********
2026-04-09 00:54:44.768858 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.768864 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.768871 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.768877 | orchestrator |
2026-04-09 00:54:44.768884 | orchestrator | TASK [include_role : skyline] **************************************************
2026-04-09 00:54:44.768890 | orchestrator | Thursday 09 April 2026 00:53:44 +0000 (0:00:01.349) 0:05:18.704 ********
2026-04-09 00:54:44.768896 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:44.768903 | orchestrator |
2026-04-09 00:54:44.768909 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-04-09 00:54:44.768915 | orchestrator | Thursday 09 April 2026 00:53:46 +0000 (0:00:01.711) 0:05:20.415 ********
2026-04-09 00:54:44.768922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 00:54:44.768934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 00:54:44.768987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 00:54:44.769000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 00:54:44.769007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 00:54:44.769020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 00:54:44.769027 | orchestrator |
2026-04-09 00:54:44.769033 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-04-09 00:54:44.769039 | orchestrator | Thursday 09 April 2026 00:53:52 +0000 (0:00:05.920) 0:05:26.336 ********
2026-04-09 00:54:44.769052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 00:54:44.769060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 00:54:44.769067 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.769074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 00:54:44.769084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 00:54:44.769091 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.769106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-09 00:54:44.769113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-09 00:54:44.769120 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.769126 | orchestrator |
2026-04-09 00:54:44.769133 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-09 00:54:44.769139 | orchestrator | Thursday 09 April 2026 00:53:53 +0000 (0:00:00.625) 0:05:26.962 ********
2026-04-09 00:54:44.769149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 00:54:44.769161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 00:54:44.769172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 00:54:44.769183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 00:54:44.769193 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.769204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 00:54:44.769216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 00:54:44.769226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 00:54:44.769235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 00:54:44.769242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 00:54:44.769256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-09 00:54:44.769262 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.769269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 00:54:44.769275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-09 00:54:44.769282 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.769288 | orchestrator |
2026-04-09 00:54:44.769294 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-09 00:54:44.769300 | orchestrator | Thursday 09 April 2026 00:53:54 +0000 (0:00:01.074) 0:05:28.036 ********
2026-04-09 00:54:44.769311 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:44.769317 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:44.769324 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:44.769330 | orchestrator |
2026-04-09 00:54:44.769336 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-09 00:54:44.769343 | orchestrator | Thursday 09 April 2026 00:53:55 +0000 (0:00:01.658) 0:05:29.694 ********
2026-04-09 00:54:44.769349 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:44.769355 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:44.769361 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:44.769368 | orchestrator |
2026-04-09 00:54:44.769374 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-09 00:54:44.769380 | orchestrator | Thursday 09 April 2026 00:53:57 +0000 (0:00:01.983) 0:05:31.678 ********
2026-04-09 00:54:44.769387 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.769393 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.769399 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.769406 | orchestrator |
2026-04-09 00:54:44.769412 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-09 00:54:44.769418 | orchestrator | Thursday 09 April 2026 00:53:58 +0000 (0:00:00.286) 0:05:31.965 ********
2026-04-09 00:54:44.769424 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.769430 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.769437 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.769443 | orchestrator |
2026-04-09 00:54:44.769449 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-09 00:54:44.769455 | orchestrator | Thursday 09 April 2026 00:53:58 +0000 (0:00:00.279) 0:05:32.244 ********
2026-04-09 00:54:44.769462 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.769468 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.769474 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.769480 | orchestrator |
2026-04-09 00:54:44.769486 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-09 00:54:44.769493 | orchestrator | Thursday 09 April 2026 00:53:58 +0000 (0:00:00.472) 0:05:32.513 ********
2026-04-09 00:54:44.769499 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.769505 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.769511 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.769516 | orchestrator |
2026-04-09 00:54:44.769522 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-09 00:54:44.769527 | orchestrator | Thursday 09 April 2026 00:53:59 +0000 (0:00:00.472) 0:05:32.986 ********
2026-04-09 00:54:44.769533 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:44.769538 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:44.769543 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:44.769549 | orchestrator |
2026-04-09 00:54:44.769554 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-09 00:54:44.769560 | orchestrator | Thursday 09 April 2026 00:53:59 +0000 (0:00:00.269) 0:05:33.255 ********
2026-04-09 00:54:44.769565 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:44.769571 | orchestrator |
2026-04-09 00:54:44.769576 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-09 00:54:44.769582 | orchestrator | Thursday 09 April 2026 00:54:00 +0000 (0:00:01.575) 0:05:34.831 ********
2026-04-09 00:54:44.769588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:54:44.769602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:54:44.769608 | orchestrator | changed: [testbed-node-2] => (item={'key':
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:54:44.769614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.769620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.769626 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:54:44.769632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.769638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.769652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:54:44.769659 | orchestrator | 2026-04-09 00:54:44.769664 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-09 00:54:44.769670 | orchestrator | Thursday 09 April 2026 00:54:03 +0000 (0:00:02.495) 0:05:37.327 ******** 2026-04-09 00:54:44.769675 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:54:44.769681 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:54:44.769687 | orchestrator | } 2026-04-09 00:54:44.769692 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:54:44.769698 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:54:44.769703 | orchestrator | } 2026-04-09 00:54:44.769708 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:54:44.769714 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:54:44.769719 | orchestrator | } 2026-04-09 00:54:44.769725 | orchestrator | 2026-04-09 00:54:44.769730 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:54:44.769736 | orchestrator | Thursday 09 April 2026 00:54:03 +0000 (0:00:00.370) 0:05:37.698 ******** 2026-04-09 00:54:44.769742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.769748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.769753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.769759 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.769765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:54:44.769774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.769785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.769791 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.769797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}})  2026-04-09 00:54:44.769803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:54:44.769809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:54:44.769815 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.769820 | orchestrator | 2026-04-09 00:54:44.769826 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-09 00:54:44.769831 | orchestrator | Thursday 09 April 2026 00:54:05 +0000 (0:00:01.487) 0:05:39.185 ******** 2026-04-09 00:54:44.769837 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.769842 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.769851 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.769856 | orchestrator | 2026-04-09 00:54:44.769861 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-09 00:54:44.769867 | orchestrator | 
Thursday 09 April 2026 00:54:05 +0000 (0:00:00.670) 0:05:39.855 ******** 2026-04-09 00:54:44.769873 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.769878 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.769884 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.769889 | orchestrator | 2026-04-09 00:54:44.769895 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-09 00:54:44.769900 | orchestrator | Thursday 09 April 2026 00:54:06 +0000 (0:00:00.361) 0:05:40.217 ******** 2026-04-09 00:54:44.769906 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.769911 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.769917 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.769922 | orchestrator | 2026-04-09 00:54:44.769928 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-09 00:54:44.769933 | orchestrator | Thursday 09 April 2026 00:54:07 +0000 (0:00:01.280) 0:05:41.497 ******** 2026-04-09 00:54:44.769939 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.769978 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.769984 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.769989 | orchestrator | 2026-04-09 00:54:44.769995 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-09 00:54:44.770000 | orchestrator | Thursday 09 April 2026 00:54:08 +0000 (0:00:00.894) 0:05:42.392 ******** 2026-04-09 00:54:44.770006 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.770031 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.770038 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.770043 | orchestrator | 2026-04-09 00:54:44.770049 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-09 00:54:44.770054 | orchestrator | Thursday 09 April 2026 00:54:09 +0000 (0:00:01.035) 0:05:43.427 ******** 
2026-04-09 00:54:44.770060 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.770065 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.770070 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.770076 | orchestrator | 2026-04-09 00:54:44.770081 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-09 00:54:44.770087 | orchestrator | Thursday 09 April 2026 00:54:14 +0000 (0:00:04.633) 0:05:48.060 ******** 2026-04-09 00:54:44.770092 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.770099 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.770112 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.770118 | orchestrator | 2026-04-09 00:54:44.770124 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-09 00:54:44.770132 | orchestrator | Thursday 09 April 2026 00:54:17 +0000 (0:00:02.978) 0:05:51.038 ******** 2026-04-09 00:54:44.770138 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:44.770143 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.770149 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.770157 | orchestrator | 2026-04-09 00:54:44.770165 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-09 00:54:44.770173 | orchestrator | Thursday 09 April 2026 00:54:25 +0000 (0:00:08.536) 0:05:59.574 ******** 2026-04-09 00:54:44.770181 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.770189 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.770196 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.770204 | orchestrator | 2026-04-09 00:54:44.770212 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-09 00:54:44.770219 | orchestrator | Thursday 09 April 2026 00:54:29 +0000 (0:00:03.750) 0:06:03.324 ******** 2026-04-09 00:54:44.770227 | orchestrator | 
changed: [testbed-node-1] 2026-04-09 00:54:44.770234 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:44.770242 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:44.770249 | orchestrator | 2026-04-09 00:54:44.770257 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-09 00:54:44.770271 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:07.951) 0:06:11.276 ******** 2026-04-09 00:54:44.770280 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.770288 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.770296 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.770305 | orchestrator | 2026-04-09 00:54:44.770313 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-09 00:54:44.770321 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:00.638) 0:06:11.914 ******** 2026-04-09 00:54:44.770329 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.770335 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.770340 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.770345 | orchestrator | 2026-04-09 00:54:44.770350 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-09 00:54:44.770355 | orchestrator | Thursday 09 April 2026 00:54:38 +0000 (0:00:00.396) 0:06:12.311 ******** 2026-04-09 00:54:44.770360 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.770365 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.770370 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.770374 | orchestrator | 2026-04-09 00:54:44.770379 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-09 00:54:44.770384 | orchestrator | Thursday 09 April 2026 00:54:38 +0000 (0:00:00.358) 0:06:12.669 ******** 2026-04-09 00:54:44.770389 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 00:54:44.770394 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.770399 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.770404 | orchestrator | 2026-04-09 00:54:44.770408 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-09 00:54:44.770413 | orchestrator | Thursday 09 April 2026 00:54:39 +0000 (0:00:00.348) 0:06:13.017 ******** 2026-04-09 00:54:44.770418 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.770423 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.770428 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.770433 | orchestrator | 2026-04-09 00:54:44.770437 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-09 00:54:44.770442 | orchestrator | Thursday 09 April 2026 00:54:39 +0000 (0:00:00.674) 0:06:13.692 ******** 2026-04-09 00:54:44.770447 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:44.770452 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:44.770457 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:44.770462 | orchestrator | 2026-04-09 00:54:44.770467 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-09 00:54:44.770472 | orchestrator | Thursday 09 April 2026 00:54:40 +0000 (0:00:00.353) 0:06:14.046 ******** 2026-04-09 00:54:44.770477 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:44.770482 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.770487 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.770491 | orchestrator | 2026-04-09 00:54:44.770496 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-09 00:54:44.770501 | orchestrator | Thursday 09 April 2026 00:54:41 +0000 (0:00:00.943) 0:06:14.989 ******** 2026-04-09 00:54:44.770506 | orchestrator | ok: [testbed-node-0] 
2026-04-09 00:54:44.770511 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:44.770516 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:44.770520 | orchestrator | 2026-04-09 00:54:44.770525 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:54:44.770530 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-09 00:54:44.770536 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-09 00:54:44.770541 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-09 00:54:44.770549 | orchestrator | 2026-04-09 00:54:44.770553 | orchestrator | 2026-04-09 00:54:44.770558 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:54:44.770563 | orchestrator | Thursday 09 April 2026 00:54:41 +0000 (0:00:00.879) 0:06:15.868 ******** 2026-04-09 00:54:44.770568 | orchestrator | =============================================================================== 2026-04-09 00:54:44.770573 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.54s 2026-04-09 00:54:44.770578 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.95s 2026-04-09 00:54:44.770583 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.11s 2026-04-09 00:54:44.770591 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.09s 2026-04-09 00:54:44.770596 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.92s 2026-04-09 00:54:44.770604 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 5.26s 2026-04-09 00:54:44.770609 | orchestrator | haproxy-config : Copying over horizon haproxy 
config -------------------- 4.77s 2026-04-09 00:54:44.770614 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.73s 2026-04-09 00:54:44.770618 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.66s 2026-04-09 00:54:44.770623 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.63s 2026-04-09 00:54:44.770628 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.51s 2026-04-09 00:54:44.770633 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.50s 2026-04-09 00:54:44.770638 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.45s 2026-04-09 00:54:44.770642 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.39s 2026-04-09 00:54:44.770647 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.35s 2026-04-09 00:54:44.770652 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.23s 2026-04-09 00:54:44.770657 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.22s 2026-04-09 00:54:44.770661 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.05s 2026-04-09 00:54:44.770666 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.96s 2026-04-09 00:54:44.770671 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.90s 2026-04-09 00:54:47.785297 | orchestrator | 2026-04-09 00:54:47 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:54:47.786759 | orchestrator | 2026-04-09 00:54:47 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:54:47.788523 | orchestrator | 2026-04-09 00:54:47 | INFO  | Task 
bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED 2026-04-09 00:54:47.788573 | orchestrator | 2026-04-09 00:54:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:54:50.822208 | orchestrator | 2026-04-09 00:54:50 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:54:50.824124 | orchestrator | 2026-04-09 00:54:50 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:54:50.826173 | orchestrator | 2026-04-09 00:54:50 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED 2026-04-09 00:54:50.826489 | orchestrator | 2026-04-09 00:54:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:54:53.860555 | orchestrator | 2026-04-09 00:54:53 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:54:53.860996 | orchestrator | 2026-04-09 00:54:53 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:54:53.862548 | orchestrator | 2026-04-09 00:54:53 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED 2026-04-09 00:54:53.862577 | orchestrator | 2026-04-09 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:54:56.903254 | orchestrator | 2026-04-09 00:54:56 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:54:56.904062 | orchestrator | 2026-04-09 00:54:56 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:54:56.905135 | orchestrator | 2026-04-09 00:54:56 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED 2026-04-09 00:54:56.905170 | orchestrator | 2026-04-09 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:54:59.934988 | orchestrator | 2026-04-09 00:54:59 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED 2026-04-09 00:54:59.935089 | orchestrator | 2026-04-09 00:54:59 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state 
STARTED
2026-04-09 00:54:59.936458 | orchestrator | 2026-04-09 00:54:59 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED
2026-04-09 00:54:59.936510 | orchestrator | 2026-04-09 00:54:59 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:55:02.976709 | orchestrator | 2026-04-09 00:55:02 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state STARTED
2026-04-09 00:55:02.977066 | orchestrator | 2026-04-09 00:55:02 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED
2026-04-09 00:55:02.978069 | orchestrator | 2026-04-09 00:55:02 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED
2026-04-09 00:55:02.978110 | orchestrator | 2026-04-09 00:55:02 | INFO  | Wait 1 second(s) until the next check
[... the same three status checks repeat every ~3 s at 00:55:06, 00:55:09, 00:55:12, 00:55:15, 00:55:18, 00:55:21, 00:55:24, 00:55:27, 00:55:30, 00:55:33, 00:55:36 and 00:55:39; all three tasks remain in state STARTED throughout ...]
2026-04-09 00:55:42.594862 | orchestrator | 2026-04-09 00:55:42 | INFO  | Task fed34117-3bfb-4c1f-8359-b41006fa8b9a is in state SUCCESS
2026-04-09 00:55:42.596536 | orchestrator |
[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Prepare deployment of Ceph services] *************************************

TASK [ceph-facts : Include facts.yml] ******************************************
Thursday 09 April 2026 00:45:46 +0000 (0:00:00.619) 0:00:00.619 ********
included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-facts : Check if it is atomic host] *********************************
Thursday 09 April 2026 00:45:47 +0000 (0:00:01.026) 0:00:01.646 ********
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact is_atomic] *****************************************
Thursday 09 April 2026 00:45:49 +0000 (0:00:02.021) 0:00:03.667 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Check if podman binary is present] **************************
Thursday 09 April 2026 00:45:50 +0000 (0:00:00.568) 0:00:04.236 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact container_binary] **********************************
Thursday 09 April 2026 00:45:51 +0000 (0:00:00.990) 0:00:05.226 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
Thursday 09 April 2026 00:45:52 +0000 (0:00:00.877) 0:00:06.104 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
Thursday 09 April 2026 00:45:53 +0000 (0:00:00.688) 0:00:06.792 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
Thursday 09 April 2026 00:45:54 +0000 (0:00:01.123) 0:00:07.915 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
Thursday 09 April 2026 00:45:54 +0000 (0:00:00.848) 0:00:08.764 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
Thursday 09 April 2026 00:45:56 +0000 (0:00:00.583) 0:00:09.794 ********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
Thursday 09 April 2026 00:45:56 +0000 (0:00:00.583) 0:00:10.378 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Find a running mon container] *******************************
Thursday 09 April 2026 00:45:58 +0000 (0:00:01.584) 0:00:11.963 ********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Check for a ceph mon socket] ********************************
Thursday 09 April 2026 00:46:00 +0000 (0:00:02.686) 0:00:14.649 ********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
Thursday 09 April 2026 00:46:01 +0000 (0:00:00.350) 0:00:14.999 ********
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
Thursday 09 April 2026 00:46:02 +0000 (0:00:01.069) 0:00:16.069 ********
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact running_mon - container] ***************************
Thursday 09 April 2026 00:46:02 +0000 (0:00:00.180) 0:00:16.250 ********
skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 00:45:58.888947', 'end': '2026-04-09 00:45:58.978391', 'delta': '0:00:00.089444', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 00:45:59.510532', 'end': '2026-04-09 00:45:59.621674', 'delta': '0:00:00.111142', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 00:46:00.660868', 'end': '2026-04-09 00:46:00.753805', 'delta': '0:00:00.092937', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
Thursday 09 April 2026 00:46:02 +0000 (0:00:00.443) 0:00:16.693 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Get current fsid if cluster is already running] *************
Thursday 09 April 2026 00:46:05 +0000 (0:00:02.318) 0:00:19.011 ********
ok: [testbed-node-0]

TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
Thursday 09 April 2026 00:46:06 +0000 (0:00:00.810) 0:00:19.822 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Get current fsid] *******************************************
Thursday 09 April 2026 00:46:07 +0000 (0:00:01.347) 0:00:21.169 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact fsid] **********************************************
Thursday 09 April 2026 00:46:09 +0000 (0:00:02.182) 0:00:23.353 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
Thursday 09 April 2026 00:46:10 +0000 (0:00:00.862) 0:00:24.215 ********
skipping: [testbed-node-0]

TASK [ceph-facts : Generate cluster fsid] **************************************
Thursday 09 April 2026 00:46:10 +0000 (0:00:00.097) 0:00:24.313 ********
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact fsid] **********************************************
Thursday 09 April 2026 00:46:10 +0000 (0:00:00.289) 0:00:24.602 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Resolve device link(s)] *************************************
Thursday 09 April 2026 00:46:11 +0000 (0:00:01.069) 0:00:25.672 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
Thursday 09 April 2026 00:46:12 +0000 (0:00:01.085) 0:00:26.757 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
Thursday 09 April 2026 00:46:13 +0000 (0:00:00.760) 0:00:27.518 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
Thursday 09 April 2026 00:46:14 +0000 (0:00:00.867) 0:00:28.385 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
Thursday 09 April 2026 00:46:15 +0000 (0:00:00.688) 0:00:29.073 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
Thursday 09 April 2026 00:46:16 +0000 (0:00:01.114) 0:00:30.188 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Collect existed devices] ************************************
Thursday 09 April 2026 00:46:17 +0000 (0:00:00.662) 0:00:30.850 ********
skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part1', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part14', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part15', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part16', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.599301 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.599312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599320 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599326 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.599341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.599377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.599383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2293633--4853--52c3--92d9--c83407e5923f-osd--block--d2293633--4853--52c3--92d9--c83407e5923f', 'dm-uuid-LVM-nR58myQ6pK7CQaaoaqeUaTr2y04UWbY4rmwX38Fsdxa6f0tdDHKde9pIwH3mBu3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9ee08831--7be2--5055--b7bf--21e225eea3cc-osd--block--9ee08831--7be2--5055--b7bf--21e225eea3cc', 
'dm-uuid-LVM-inKPYMNJVzOEcfQ61vGzCEOAGy0y8MwHDdw1TsQPoBrMrQLR2EaxpS4lADMlmMXF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599403 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.599413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d534d538--4d4e--5604--9605--85867297f7ab-osd--block--d534d538--4d4e--5604--9605--85867297f7ab', 'dm-uuid-LVM-Lo4LcmWfSy7gVLMDXOQe0r6XJWEZ5FSB3EMePpFYvvdguKgOr1hP2cnsNB4diqWS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--6327354e--b41f--514e--b570--068bfc1f3295-osd--block--6327354e--b41f--514e--b570--068bfc1f3295', 'dm-uuid-LVM-EC5U4dGvscytX2YiEPz751fODyiM5M72dFyO1tHVtyJ16NzQnwBHL7mR7Apxxh1s'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part1', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part14', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part15', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part16', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.599587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a254e30f--06f2--55f8--8a7e--64e382968b4c-osd--block--a254e30f--06f2--55f8--8a7e--64e382968b4c', 'dm-uuid-LVM-ATeWDLeRt2MqpUMSviKSYYqUAu28iXv2moIkCA8ri2lff0l9G9wTQ20ulwcOt3m7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d2293633--4853--52c3--92d9--c83407e5923f-osd--block--d2293633--4853--52c3--92d9--c83407e5923f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b0xHt1-aLY6-yIBz-O5g5-Np9W-VHpg-RipOIZ', 'scsi-0QEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289', 'scsi-SQEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.599651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.599670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.599686 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9ee08831--7be2--5055--b7bf--21e225eea3cc-osd--block--9ee08831--7be2--5055--b7bf--21e225eea3cc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5qfd8o-m5s0-C2o8-KAuj-GJfR-md71-iQQ1hr', 'scsi-0QEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2', 'scsi-SQEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a6a3488f--30e9--5ba3--9724--16c1df88c443-osd--block--a6a3488f--30e9--5ba3--9724--16c1df88c443', 'dm-uuid-LVM-Ctb2dXGixJo1dOG789QKtkmq0iEDri4EYqcM52u9gZ93MvFmZ4au4J2KtuUbJIHA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d534d538--4d4e--5604--9605--85867297f7ab-osd--block--d534d538--4d4e--5604--9605--85867297f7ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UdmBAE-npqq-tfss-7hRw-4N8m-BlAM-rR1vGg', 'scsi-0QEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299', 'scsi-SQEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6327354e--b41f--514e--b570--068bfc1f3295-osd--block--6327354e--b41f--514e--b570--068bfc1f3295'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TakQuR-BfQB-UvNn-2mEJ-PIfA-Aqz3-sZqwRb', 'scsi-0QEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb', 'scsi-SQEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d', 'scsi-SQEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965', 'scsi-SQEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600236 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.600290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': 
'1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600303 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.600307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:55:42.600366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part1', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part14', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part15', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part16', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600373 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a254e30f--06f2--55f8--8a7e--64e382968b4c-osd--block--a254e30f--06f2--55f8--8a7e--64e382968b4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KFvPvZ-xma3-G6w8-CL9f-zhKx-TjJS-x3zHJD', 'scsi-0QEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2', 'scsi-SQEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a6a3488f--30e9--5ba3--9724--16c1df88c443-osd--block--a6a3488f--30e9--5ba3--9724--16c1df88c443'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2iUzY0-E3Ac-OC4P-7dkU-6l7s-3ZU1-p0bxHR', 'scsi-0QEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f', 'scsi-SQEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669', 'scsi-SQEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:55:42.600499 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.600504 | orchestrator | 2026-04-09 00:55:42.600509 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 00:55:42.600513 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:01.313) 0:00:32.164 ******** 2026-04-09 00:55:42.600517 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600815 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600836 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600847 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600854 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600928 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600938 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600950 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part1', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part14', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part15', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part16', 'scsi-SQEMU_QEMU_HARDDISK_b6aa33c0-a4a8-450a-bdfd-eaf334278fb9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 00:55:42.600989 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.600995 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601001 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601008 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601019 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601029 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601036 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601085 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601092 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601100 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a63ed1c-9d65-47b3-a05d-98e5e45fbc34-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601112 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601119 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.601175 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601185 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601192 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601207 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601217 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601223 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601256 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601261 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601266 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a1bf46f-1fd6-404c-9afe-46a88b051d7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601277 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601281 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.601361 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d2293633--4853--52c3--92d9--c83407e5923f-osd--block--d2293633--4853--52c3--92d9--c83407e5923f', 'dm-uuid-LVM-nR58myQ6pK7CQaaoaqeUaTr2y04UWbY4rmwX38Fsdxa6f0tdDHKde9pIwH3mBu3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601376 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9ee08831--7be2--5055--b7bf--21e225eea3cc-osd--block--9ee08831--7be2--5055--b7bf--21e225eea3cc', 'dm-uuid-LVM-inKPYMNJVzOEcfQ61vGzCEOAGy0y8MwHDdw1TsQPoBrMrQLR2EaxpS4lADMlmMXF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601384 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.601388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601402 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-09 00:55:42.601438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601452 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601463 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601469 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d534d538--4d4e--5604--9605--85867297f7ab-osd--block--d534d538--4d4e--5604--9605--85867297f7ab', 'dm-uuid-LVM-Lo4LcmWfSy7gVLMDXOQe0r6XJWEZ5FSB3EMePpFYvvdguKgOr1hP2cnsNB4diqWS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part1', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part14', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part15', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part16', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 00:55:42.601512 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6327354e--b41f--514e--b570--068bfc1f3295-osd--block--6327354e--b41f--514e--b570--068bfc1f3295', 'dm-uuid-LVM-EC5U4dGvscytX2YiEPz751fODyiM5M72dFyO1tHVtyJ16NzQnwBHL7mR7Apxxh1s'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d2293633--4853--52c3--92d9--c83407e5923f-osd--block--d2293633--4853--52c3--92d9--c83407e5923f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b0xHt1-aLY6-yIBz-O5g5-Np9W-VHpg-RipOIZ', 'scsi-0QEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289', 'scsi-SQEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9ee08831--7be2--5055--b7bf--21e225eea3cc-osd--block--9ee08831--7be2--5055--b7bf--21e225eea3cc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5qfd8o-m5s0-C2o8-KAuj-GJfR-md71-iQQ1hr', 'scsi-0QEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2', 'scsi-SQEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d', 'scsi-SQEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601573 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601594 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601598 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.601638 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601652 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601688 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601698 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d534d538--4d4e--5604--9605--85867297f7ab-osd--block--d534d538--4d4e--5604--9605--85867297f7ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UdmBAE-npqq-tfss-7hRw-4N8m-BlAM-rR1vGg', 'scsi-0QEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299', 'scsi-SQEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601702 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6327354e--b41f--514e--b570--068bfc1f3295-osd--block--6327354e--b41f--514e--b570--068bfc1f3295'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TakQuR-BfQB-UvNn-2mEJ-PIfA-Aqz3-sZqwRb', 'scsi-0QEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb', 'scsi-SQEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965', 'scsi-SQEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601712 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601716 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.601744 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a254e30f--06f2--55f8--8a7e--64e382968b4c-osd--block--a254e30f--06f2--55f8--8a7e--64e382968b4c', 'dm-uuid-LVM-ATeWDLeRt2MqpUMSviKSYYqUAu28iXv2moIkCA8ri2lff0l9G9wTQ20ulwcOt3m7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601753 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a6a3488f--30e9--5ba3--9724--16c1df88c443-osd--block--a6a3488f--30e9--5ba3--9724--16c1df88c443', 'dm-uuid-LVM-Ctb2dXGixJo1dOG789QKtkmq0iEDri4EYqcM52u9gZ93MvFmZ4au4J2KtuUbJIHA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601760 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601771 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601777 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601843 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601856 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part1', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part14', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part15', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part16', 
'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a254e30f--06f2--55f8--8a7e--64e382968b4c-osd--block--a254e30f--06f2--55f8--8a7e--64e382968b4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KFvPvZ-xma3-G6w8-CL9f-zhKx-TjJS-x3zHJD', 'scsi-0QEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2', 'scsi-SQEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a6a3488f--30e9--5ba3--9724--16c1df88c443-osd--block--a6a3488f--30e9--5ba3--9724--16c1df88c443'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2iUzY0-E3Ac-OC4P-7dkU-6l7s-3ZU1-p0bxHR', 'scsi-0QEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f', 'scsi-SQEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601970 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669', 'scsi-SQEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:55:42.601981 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.601986 | orchestrator | 2026-04-09 00:55:42.601990 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 00:55:42.601994 | orchestrator | Thursday 09 April 2026 00:46:19 +0000 (0:00:01.127) 0:00:33.292 ******** 2026-04-09 00:55:42.602060 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.602071 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.602079 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.602086 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.602092 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.602098 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.602105 | orchestrator | 2026-04-09 00:55:42.602111 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 00:55:42.602130 | orchestrator | Thursday 09 April 2026 00:46:20 +0000 (0:00:01.087) 0:00:34.379 ******** 2026-04-09 00:55:42.602137 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.602143 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.602149 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.602156 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.602162 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.602168 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.602175 | orchestrator | 2026-04-09 00:55:42.602180 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 00:55:42.602185 | orchestrator | Thursday 09 April 2026 00:46:21 +0000 (0:00:00.676) 0:00:35.056 ******** 2026-04-09 00:55:42.602191 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:55:42.602198 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.602204 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.602211 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602217 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.602223 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.602227 | orchestrator | 2026-04-09 00:55:42.602231 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 00:55:42.602234 | orchestrator | Thursday 09 April 2026 00:46:22 +0000 (0:00:00.856) 0:00:35.912 ******** 2026-04-09 00:55:42.602238 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.602242 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.602246 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.602253 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602259 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.602266 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.602272 | orchestrator | 2026-04-09 00:55:42.602278 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 00:55:42.602282 | orchestrator | Thursday 09 April 2026 00:46:23 +0000 (0:00:01.195) 0:00:37.107 ******** 2026-04-09 00:55:42.602286 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.602289 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.602293 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.602297 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602300 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.602304 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.602308 | orchestrator | 2026-04-09 00:55:42.602312 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 00:55:42.602316 | orchestrator | Thursday 09 April 
2026 00:46:24 +0000 (0:00:01.052) 0:00:38.160 ******** 2026-04-09 00:55:42.602319 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.602323 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.602329 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.602336 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602342 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.602348 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.602362 | orchestrator | 2026-04-09 00:55:42.602415 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 00:55:42.602421 | orchestrator | Thursday 09 April 2026 00:46:25 +0000 (0:00:01.018) 0:00:39.178 ******** 2026-04-09 00:55:42.602425 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:55:42.602429 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 00:55:42.602433 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-09 00:55:42.602437 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-09 00:55:42.602440 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-09 00:55:42.602444 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-09 00:55:42.602448 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-09 00:55:42.602452 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-09 00:55:42.602456 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-09 00:55:42.602463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 00:55:42.602467 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-09 00:55:42.602470 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-09 00:55:42.602474 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-09 00:55:42.602478 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-2) 2026-04-09 00:55:42.602482 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-09 00:55:42.602485 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-09 00:55:42.602489 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-09 00:55:42.602493 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-09 00:55:42.602538 | orchestrator | 2026-04-09 00:55:42.602545 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 00:55:42.602551 | orchestrator | Thursday 09 April 2026 00:46:30 +0000 (0:00:04.893) 0:00:44.072 ******** 2026-04-09 00:55:42.602557 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:55:42.602563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:55:42.602578 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:55:42.602582 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-09 00:55:42.602586 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-09 00:55:42.602589 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-09 00:55:42.602596 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.602602 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-09 00:55:42.602608 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-09 00:55:42.602615 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-09 00:55:42.602642 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.602647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 00:55:42.602651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 00:55:42.602655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  
2026-04-09 00:55:42.602659 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.602663 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-09 00:55:42.602666 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-09 00:55:42.602670 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-09 00:55:42.602674 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602677 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.602681 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-09 00:55:42.602685 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-09 00:55:42.602689 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-09 00:55:42.602693 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.602701 | orchestrator | 2026-04-09 00:55:42.602705 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 00:55:42.602708 | orchestrator | Thursday 09 April 2026 00:46:31 +0000 (0:00:01.193) 0:00:45.266 ******** 2026-04-09 00:55:42.602712 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.602716 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.602720 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.602724 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.602728 | orchestrator | 2026-04-09 00:55:42.602732 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 00:55:42.602737 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:00.957) 0:00:46.224 ******** 2026-04-09 00:55:42.602741 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602745 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:55:42.602748 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.602752 | orchestrator | 2026-04-09 00:55:42.602756 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 00:55:42.602760 | orchestrator | Thursday 09 April 2026 00:46:33 +0000 (0:00:00.606) 0:00:46.830 ******** 2026-04-09 00:55:42.602764 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602767 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.602771 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.602775 | orchestrator | 2026-04-09 00:55:42.602779 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 00:55:42.602783 | orchestrator | Thursday 09 April 2026 00:46:33 +0000 (0:00:00.405) 0:00:47.235 ******** 2026-04-09 00:55:42.602786 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602790 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.602794 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.602798 | orchestrator | 2026-04-09 00:55:42.602802 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 00:55:42.602805 | orchestrator | Thursday 09 April 2026 00:46:33 +0000 (0:00:00.497) 0:00:47.732 ******** 2026-04-09 00:55:42.602809 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.602813 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.602817 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.602821 | orchestrator | 2026-04-09 00:55:42.602824 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 00:55:42.602828 | orchestrator | Thursday 09 April 2026 00:46:34 +0000 (0:00:00.683) 0:00:48.416 ******** 2026-04-09 00:55:42.602832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:55:42.602836 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:55:42.602839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:55:42.602843 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602847 | orchestrator | 2026-04-09 00:55:42.602851 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 00:55:42.602857 | orchestrator | Thursday 09 April 2026 00:46:35 +0000 (0:00:00.457) 0:00:48.874 ******** 2026-04-09 00:55:42.602861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:55:42.602865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:55:42.602879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:55:42.602884 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602888 | orchestrator | 2026-04-09 00:55:42.602891 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 00:55:42.602895 | orchestrator | Thursday 09 April 2026 00:46:35 +0000 (0:00:00.405) 0:00:49.279 ******** 2026-04-09 00:55:42.602899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:55:42.602903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:55:42.602910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:55:42.602914 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.602917 | orchestrator | 2026-04-09 00:55:42.602921 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 00:55:42.602925 | orchestrator | Thursday 09 April 2026 00:46:35 +0000 (0:00:00.407) 0:00:49.687 ******** 2026-04-09 00:55:42.602929 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.602932 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.602936 | orchestrator | ok: [testbed-node-5] 
2026-04-09 00:55:42.602940 | orchestrator | 2026-04-09 00:55:42.602944 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 00:55:42.602947 | orchestrator | Thursday 09 April 2026 00:46:36 +0000 (0:00:00.341) 0:00:50.028 ******** 2026-04-09 00:55:42.602951 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 00:55:42.602955 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 00:55:42.602959 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 00:55:42.602962 | orchestrator | 2026-04-09 00:55:42.602980 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 00:55:42.602985 | orchestrator | Thursday 09 April 2026 00:46:37 +0000 (0:00:00.789) 0:00:50.818 ******** 2026-04-09 00:55:42.602988 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:55:42.602992 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:55:42.602996 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:55:42.603000 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 00:55:42.603004 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 00:55:42.603007 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 00:55:42.603011 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 00:55:42.603015 | orchestrator | 2026-04-09 00:55:42.603019 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 00:55:42.603022 | orchestrator | Thursday 09 April 2026 00:46:38 +0000 (0:00:01.283) 0:00:52.101 ******** 2026-04-09 00:55:42.603026 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2026-04-09 00:55:42.603030 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:55:42.603034 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:55:42.603038 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-09 00:55:42.603041 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 00:55:42.603045 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 00:55:42.603049 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 00:55:42.603053 | orchestrator | 2026-04-09 00:55:42.603056 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:55:42.603060 | orchestrator | Thursday 09 April 2026 00:46:40 +0000 (0:00:01.715) 0:00:53.817 ******** 2026-04-09 00:55:42.603064 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.603068 | orchestrator | 2026-04-09 00:55:42.603072 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:55:42.603076 | orchestrator | Thursday 09 April 2026 00:46:41 +0000 (0:00:01.488) 0:00:55.306 ******** 2026-04-09 00:55:42.603080 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.603084 | orchestrator | 2026-04-09 00:55:42.603090 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:55:42.603094 | orchestrator | Thursday 09 April 2026 
00:46:42 +0000 (0:00:01.064) 0:00:56.371 ******** 2026-04-09 00:55:42.603098 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.603102 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.603105 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.603109 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.603113 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.603117 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.603121 | orchestrator | 2026-04-09 00:55:42.603124 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:55:42.603128 | orchestrator | Thursday 09 April 2026 00:46:43 +0000 (0:00:00.838) 0:00:57.210 ******** 2026-04-09 00:55:42.603132 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.603136 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.603142 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.603145 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.603149 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.603153 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.603157 | orchestrator | 2026-04-09 00:55:42.603161 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:55:42.603164 | orchestrator | Thursday 09 April 2026 00:46:44 +0000 (0:00:00.959) 0:00:58.169 ******** 2026-04-09 00:55:42.603168 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.603172 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.603175 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.603179 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.603184 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.603188 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.603193 | orchestrator | 2026-04-09 00:55:42.603199 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-04-09 00:55:42.603209 | orchestrator | Thursday 09 April 2026 00:46:45 +0000 (0:00:01.312) 0:00:59.481 ******** 2026-04-09 00:55:42.603216 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.603222 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.603228 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.603234 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.603240 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.603246 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.603253 | orchestrator | 2026-04-09 00:55:42.603259 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:55:42.603266 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:01.642) 0:01:01.124 ******** 2026-04-09 00:55:42.603273 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.603279 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.603286 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.603293 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.603299 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.603306 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.603312 | orchestrator | 2026-04-09 00:55:42.603321 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:55:42.603351 | orchestrator | Thursday 09 April 2026 00:46:48 +0000 (0:00:01.097) 0:01:02.222 ******** 2026-04-09 00:55:42.603359 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.603366 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.603371 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.603375 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.603380 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.603384 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.603388 | 
orchestrator | 2026-04-09 00:55:42.603393 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:55:42.603397 | orchestrator | Thursday 09 April 2026 00:46:49 +0000 (0:00:00.762) 0:01:02.984 ******** 2026-04-09 00:55:42.603402 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.603415 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.603423 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.603430 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.603436 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.603442 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.603448 | orchestrator | 2026-04-09 00:55:42.603455 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:55:42.603461 | orchestrator | Thursday 09 April 2026 00:46:49 +0000 (0:00:00.664) 0:01:03.649 ******** 2026-04-09 00:55:42.603468 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.603475 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.603481 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.603488 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.603495 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.603499 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.603506 | orchestrator | 2026-04-09 00:55:42.603515 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:55:42.603522 | orchestrator | Thursday 09 April 2026 00:46:51 +0000 (0:00:01.328) 0:01:04.977 ******** 2026-04-09 00:55:42.603528 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.603533 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.603539 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.603545 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.603551 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.603557 | 
orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.603563 | orchestrator |
2026-04-09 00:55:42.603570 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 00:55:42.603576 | orchestrator | Thursday 09 April 2026 00:46:52 +0000 (0:00:01.710) 0:01:06.688 ********
2026-04-09 00:55:42.603583 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.603589 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.603595 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.603602 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.603607 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.603613 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.603619 | orchestrator |
2026-04-09 00:55:42.603625 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 00:55:42.603632 | orchestrator | Thursday 09 April 2026 00:46:53 +0000 (0:00:00.696) 0:01:07.384 ********
2026-04-09 00:55:42.603638 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.603644 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.603651 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.603657 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.603664 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.603670 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.603676 | orchestrator |
2026-04-09 00:55:42.603686 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 00:55:42.603692 | orchestrator | Thursday 09 April 2026 00:46:54 +0000 (0:00:00.615) 0:01:08.000 ********
2026-04-09 00:55:42.603698 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.603704 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.603710 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.603716 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.603722 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.603728 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.603735 | orchestrator |
2026-04-09 00:55:42.603741 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 00:55:42.603747 | orchestrator | Thursday 09 April 2026 00:46:54 +0000 (0:00:00.782) 0:01:08.782 ********
2026-04-09 00:55:42.603753 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.603759 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.603769 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.603775 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.603780 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.603786 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.603799 | orchestrator |
2026-04-09 00:55:42.603805 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 00:55:42.603811 | orchestrator | Thursday 09 April 2026 00:46:55 +0000 (0:00:00.670) 0:01:09.453 ********
2026-04-09 00:55:42.603817 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.603823 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.603829 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.603836 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.603842 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.603848 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.603854 | orchestrator |
2026-04-09 00:55:42.603859 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:55:42.603865 | orchestrator | Thursday 09 April 2026 00:46:56 +0000 (0:00:00.893) 0:01:10.346 ********
2026-04-09 00:55:42.603884 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.603899 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.603907 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.603913 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.603920 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.603926 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.603932 | orchestrator |
2026-04-09 00:55:42.603938 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 00:55:42.603944 | orchestrator | Thursday 09 April 2026 00:46:57 +0000 (0:00:00.861) 0:01:11.208 ********
2026-04-09 00:55:42.603950 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.603956 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.603962 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.603969 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.603975 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.603981 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.603986 | orchestrator |
2026-04-09 00:55:42.604020 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 00:55:42.604028 | orchestrator | Thursday 09 April 2026 00:46:58 +0000 (0:00:01.254) 0:01:12.463 ********
2026-04-09 00:55:42.604034 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.604040 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.604046 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.604052 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.604057 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.604064 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.604070 | orchestrator |
2026-04-09 00:55:42.604077 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 00:55:42.604083 | orchestrator | Thursday 09 April 2026 00:46:59 +0000 (0:00:01.111) 0:01:13.574 ********
2026-04-09 00:55:42.604089 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.604095 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.604101 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.604108 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.604113 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.604120 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.604126 | orchestrator |
2026-04-09 00:55:42.604132 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 00:55:42.604138 | orchestrator | Thursday 09 April 2026 00:47:01 +0000 (0:00:01.432) 0:01:15.007 ********
2026-04-09 00:55:42.604143 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.604149 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.604154 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.604160 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.604166 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.604171 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.604178 | orchestrator |
2026-04-09 00:55:42.604184 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-09 00:55:42.604190 | orchestrator | Thursday 09 April 2026 00:47:02 +0000 (0:00:01.234) 0:01:16.241 ********
2026-04-09 00:55:42.604203 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.604209 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.604216 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.604222 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.604228 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.604234 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.604241 | orchestrator |
2026-04-09 00:55:42.604247 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-09 00:55:42.604253 | orchestrator | Thursday 09 April 2026 00:47:04 +0000 (0:00:01.743) 0:01:17.985 ********
2026-04-09 00:55:42.604260 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.604266 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.604272 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.604279 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.604284 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.604288 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.604292 | orchestrator |
2026-04-09 00:55:42.604295 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-09 00:55:42.604301 | orchestrator | Thursday 09 April 2026 00:47:06 +0000 (0:00:02.693) 0:01:20.679 ********
2026-04-09 00:55:42.604308 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.604315 | orchestrator |
2026-04-09 00:55:42.604321 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-09 00:55:42.604327 | orchestrator | Thursday 09 April 2026 00:47:08 +0000 (0:00:01.210) 0:01:21.889 ********
2026-04-09 00:55:42.604333 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.604339 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.604345 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.604351 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.604357 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.604363 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.604369 | orchestrator |
2026-04-09 00:55:42.604376 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-09 00:55:42.604382 | orchestrator | Thursday 09 April 2026 00:47:08 +0000 (0:00:00.581) 0:01:22.470 ********
2026-04-09 00:55:42.604388 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.604405 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.604412 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.604418 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.604424 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.604431 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.604435 | orchestrator |
2026-04-09 00:55:42.604438 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-09 00:55:42.604442 | orchestrator | Thursday 09 April 2026 00:47:09 +0000 (0:00:00.809) 0:01:23.280 ********
2026-04-09 00:55:42.604446 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 00:55:42.604450 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 00:55:42.604454 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 00:55:42.604457 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 00:55:42.604461 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 00:55:42.604465 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 00:55:42.604469 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 00:55:42.604473 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-09 00:55:42.604476 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 00:55:42.604484 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 00:55:42.604488 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 00:55:42.604511 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-09 00:55:42.604516 | orchestrator |
2026-04-09 00:55:42.604520 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-09 00:55:42.604523 | orchestrator | Thursday 09 April 2026 00:47:10 +0000 (0:00:01.331) 0:01:24.612 ********
2026-04-09 00:55:42.604527 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.604531 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.604535 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.604538 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.604542 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.604546 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.604552 | orchestrator |
2026-04-09 00:55:42.604558 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-09 00:55:42.604564 | orchestrator | Thursday 09 April 2026 00:47:12 +0000 (0:00:01.206) 0:01:25.818 ********
2026-04-09 00:55:42.604570 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.604576 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.604583 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.604589 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.604595 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.604601 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.604607 | orchestrator |
2026-04-09 00:55:42.604614 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-09 00:55:42.604620 | orchestrator | Thursday 09 April 2026 00:47:12 +0000 (0:00:00.731) 0:01:26.550 ********
2026-04-09 00:55:42.604626 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.604632 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.604639 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.604646 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.604652 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.604659 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.604665 | orchestrator |
2026-04-09 00:55:42.604671 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-09 00:55:42.604677 | orchestrator | Thursday 09 April 2026 00:47:13 +0000 (0:00:00.837) 0:01:27.388 ********
2026-04-09 00:55:42.604684 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.604690 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.604696 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.604702 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.604709 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.604715 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.604721 | orchestrator |
2026-04-09 00:55:42.604727 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-09 00:55:42.604733 | orchestrator | Thursday 09 April 2026 00:47:14 +0000 (0:00:00.569) 0:01:27.957 ********
2026-04-09 00:55:42.604740 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.604746 | orchestrator |
2026-04-09 00:55:42.604752 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-09 00:55:42.604758 | orchestrator | Thursday 09 April 2026 00:47:15 +0000 (0:00:01.122) 0:01:29.080 ********
2026-04-09 00:55:42.604764 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.604771 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.604777 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.604784 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.604790 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.604796 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.604803 | orchestrator |
2026-04-09 00:55:42.604809 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-09 00:55:42.604820 | orchestrator | Thursday 09 April 2026 00:48:35 +0000 (0:01:20.205) 0:02:49.285 ********
2026-04-09 00:55:42.604826 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:55:42.604832 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:55:42.604838 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:55:42.604848 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.604855 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:55:42.604861 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:55:42.604867 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:55:42.604885 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.604891 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:55:42.604898 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:55:42.604902 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:55:42.604906 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.604909 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:55:42.604913 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:55:42.604917 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:55:42.604924 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.604930 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:55:42.604936 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:55:42.604941 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:55:42.604947 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:55:42.604954 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:55:42.604980 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:55:42.604987 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.604993 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.604999 | orchestrator |
2026-04-09 00:55:42.605006 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 00:55:42.605012 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:00.757) 0:02:50.043 ********
2026-04-09 00:55:42.605018 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605024 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605030 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605037 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605043 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605049 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605055 | orchestrator |
2026-04-09 00:55:42.605061 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 00:55:42.605067 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:00.476) 0:02:50.519 ********
2026-04-09 00:55:42.605074 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605080 | orchestrator |
2026-04-09 00:55:42.605086 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 00:55:42.605092 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:00.264) 0:02:50.783 ********
2026-04-09 00:55:42.605098 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605104 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605110 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605116 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605127 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605133 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605139 | orchestrator |
2026-04-09 00:55:42.605145 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 00:55:42.605151 | orchestrator | Thursday 09 April 2026 00:48:37 +0000 (0:00:00.691) 0:02:51.475 ********
2026-04-09 00:55:42.605157 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605164 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605170 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605176 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605182 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605188 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605194 | orchestrator |
2026-04-09 00:55:42.605200 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 00:55:42.605207 | orchestrator | Thursday 09 April 2026 00:48:38 +0000 (0:00:01.037) 0:02:52.513 ********
2026-04-09 00:55:42.605213 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605219 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605225 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605231 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605237 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605244 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605250 | orchestrator |
2026-04-09 00:55:42.605256 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 00:55:42.605262 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:00.962) 0:02:53.475 ********
2026-04-09 00:55:42.605268 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.605274 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.605280 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.605286 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.605292 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.605298 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.605304 | orchestrator |
2026-04-09 00:55:42.605310 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 00:55:42.605317 | orchestrator | Thursday 09 April 2026 00:48:42 +0000 (0:00:03.170) 0:02:56.646 ********
2026-04-09 00:55:42.605323 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.605329 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.605335 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.605341 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.605347 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.605353 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.605359 | orchestrator |
2026-04-09 00:55:42.605365 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 00:55:42.605371 | orchestrator | Thursday 09 April 2026 00:48:43 +0000 (0:00:00.614) 0:02:57.261 ********
2026-04-09 00:55:42.605381 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-2
2026-04-09 00:55:42.605388 | orchestrator |
2026-04-09 00:55:42.605394 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 00:55:42.605400 | orchestrator | Thursday 09 April 2026 00:48:44 +0000 (0:00:01.273) 0:02:58.534 ********
2026-04-09 00:55:42.605407 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605413 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605420 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605426 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605432 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605438 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605444 | orchestrator |
2026-04-09 00:55:42.605450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 00:55:42.605457 | orchestrator | Thursday 09 April 2026 00:48:45 +0000 (0:00:00.554) 0:02:59.089 ********
2026-04-09 00:55:42.605463 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605473 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605479 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605485 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605491 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605497 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605504 | orchestrator |
2026-04-09 00:55:42.605510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 00:55:42.605516 | orchestrator | Thursday 09 April 2026 00:48:45 +0000 (0:00:00.648) 0:02:59.738 ********
2026-04-09 00:55:42.605523 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605529 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605535 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605541 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605547 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605570 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605578 | orchestrator |
2026-04-09 00:55:42.605584 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 00:55:42.605590 | orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:00.604) 0:03:00.343 ********
2026-04-09 00:55:42.605597 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605603 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605609 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605614 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605620 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605626 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605632 | orchestrator |
2026-04-09 00:55:42.605638 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 00:55:42.605644 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.792) 0:03:01.135 ********
2026-04-09 00:55:42.605650 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605657 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605663 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605669 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605675 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605682 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605688 | orchestrator |
2026-04-09 00:55:42.605694 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 00:55:42.605700 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.550) 0:03:01.685 ********
2026-04-09 00:55:42.605706 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605712 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605719 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605725 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605731 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605737 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605743 | orchestrator |
2026-04-09 00:55:42.605749 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 00:55:42.605755 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:00.738) 0:03:02.424 ********
2026-04-09 00:55:42.605761 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605768 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605774 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605780 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605786 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605792 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605798 | orchestrator |
2026-04-09 00:55:42.605804 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 00:55:42.605810 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:00.588) 0:03:03.012 ********
2026-04-09 00:55:42.605817 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.605823 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.605829 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.605835 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.605915 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.605924 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.605930 | orchestrator |
2026-04-09 00:55:42.605937 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 00:55:42.605943 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.775) 0:03:03.788 ********
2026-04-09 00:55:42.605949 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.605955 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.605961 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.605968 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.605974 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.605980 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.605985 | orchestrator |
2026-04-09 00:55:42.605992 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 00:55:42.605998 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.930) 0:03:04.718 ********
2026-04-09 00:55:42.606004 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.606011 | orchestrator |
2026-04-09 00:55:42.606042 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 00:55:42.606053 | orchestrator | Thursday 09 April 2026 00:48:51 +0000 (0:00:00.877) 0:03:05.596 ********
2026-04-09 00:55:42.606060 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-09 00:55:42.606066 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-09 00:55:42.606072 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-09 00:55:42.606079 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-09 00:55:42.606085 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-09 00:55:42.606091 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-09 00:55:42.606097 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-09 00:55:42.606103 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-09 00:55:42.606109 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-09 00:55:42.606116 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-09 00:55:42.606122 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-09 00:55:42.606128 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-09 00:55:42.606134 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-09 00:55:42.606140 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-09 00:55:42.606146 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-09 00:55:42.606152 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-09 00:55:42.606158 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-09 00:55:42.606165 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-09 00:55:42.606170 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-09 00:55:42.606174 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-09 00:55:42.606194 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-09 00:55:42.606199 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-09 00:55:42.606203 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-09 00:55:42.606207 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-09 00:55:42.606210 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-09 00:55:42.606214 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-09 00:55:42.606218 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-09 00:55:42.606222 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-09 00:55:42.606225 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-09 00:55:42.606233 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-09 00:55:42.606236 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-09 00:55:42.606240 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-09 00:55:42.606244 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-09 00:55:42.606247 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-09 00:55:42.606251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-09 00:55:42.606255 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-09 00:55:42.606258 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-09 00:55:42.606262 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-09 00:55:42.606266 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-09 00:55:42.606271 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-09 00:55:42.606277 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-09 00:55:42.606284 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-09 00:55:42.606291 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:55:42.606297 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:55:42.606304 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:55:42.606310 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:55:42.606317 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:55:42.606323 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:55:42.606328 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:55:42.606331 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:55:42.606335 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:55:42.606339 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:55:42.606344 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:55:42.606350 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:55:42.606356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:55:42.606362 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:55:42.606368 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:55:42.606373 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:55:42.606379 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:55:42.606385 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:55:42.606395 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:55:42.606401 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:55:42.606408 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:55:42.606413 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:55:42.606417 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:55:42.606421 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:55:42.606426 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:55:42.606433 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:55:42.606438 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:55:42.606445 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:55:42.606455 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:55:42.606461 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:55:42.606467 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:55:42.606473 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:55:42.606478 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:55:42.606484 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:55:42.606490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:55:42.606497 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:55:42.606524 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:55:42.606532 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:55:42.606538 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:55:42.606544 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:55:42.606550 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:55:42.606557 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:55:42.606563 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-09 00:55:42.606569 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-09 00:55:42.606575 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-09 00:55:42.606582 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-09 00:55:42.606588 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-09 00:55:42.606595 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-09 00:55:42.606601 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-09 00:55:42.606607 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-09 00:55:42.606613 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-09 00:55:42.606619 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-09 00:55:42.606626 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-09 00:55:42.606632 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-09 00:55:42.606639 | orchestrator |
2026-04-09 00:55:42.606645 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 00:55:42.606651 | orchestrator | Thursday 09 April 2026 00:48:58 +0000 (0:00:06.791) 0:03:12.388 ********
2026-04-09 00:55:42.606657 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.606663 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.606670 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.606676 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.606683 | orchestrator |
2026-04-09 00:55:42.606689 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-09 00:55:42.606695 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:00.834) 0:03:13.223 ********
2026-04-09 00:55:42.606701 |
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.606708 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.606715 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.606721 | orchestrator | 2026-04-09 00:55:42.606728 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-09 00:55:42.606739 | orchestrator | Thursday 09 April 2026 00:49:00 +0000 (0:00:00.736) 0:03:13.959 ******** 2026-04-09 00:55:42.606745 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.606751 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.606757 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.606763 | orchestrator | 2026-04-09 00:55:42.606775 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-09 00:55:42.606782 | orchestrator | Thursday 09 April 2026 00:49:01 +0000 (0:00:01.341) 0:03:15.301 ******** 2026-04-09 00:55:42.606788 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.606794 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.606800 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.606805 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.606811 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.606817 | orchestrator | ok: [testbed-node-5] 2026-04-09 
00:55:42.606822 | orchestrator | 2026-04-09 00:55:42.606827 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-09 00:55:42.606834 | orchestrator | Thursday 09 April 2026 00:49:02 +0000 (0:00:00.529) 0:03:15.831 ******** 2026-04-09 00:55:42.606840 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.606846 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.606853 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.606860 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.606866 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.606883 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.606890 | orchestrator | 2026-04-09 00:55:42.606897 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-09 00:55:42.606903 | orchestrator | Thursday 09 April 2026 00:49:02 +0000 (0:00:00.700) 0:03:16.531 ******** 2026-04-09 00:55:42.606909 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.606915 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.606921 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.606927 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.606933 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.606940 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.606946 | orchestrator | 2026-04-09 00:55:42.606952 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-09 00:55:42.606958 | orchestrator | Thursday 09 April 2026 00:49:03 +0000 (0:00:00.549) 0:03:17.081 ******** 2026-04-09 00:55:42.606986 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.606994 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607000 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607007 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607013 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607019 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607025 | orchestrator | 2026-04-09 00:55:42.607030 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-09 00:55:42.607036 | orchestrator | Thursday 09 April 2026 00:49:03 +0000 (0:00:00.585) 0:03:17.666 ******** 2026-04-09 00:55:42.607041 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607046 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607052 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607058 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607065 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607071 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607077 | orchestrator | 2026-04-09 00:55:42.607084 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-09 00:55:42.607090 | orchestrator | Thursday 09 April 2026 00:49:04 +0000 (0:00:00.499) 0:03:18.166 ******** 2026-04-09 00:55:42.607101 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607107 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607114 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607120 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607126 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607132 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607138 | orchestrator | 2026-04-09 00:55:42.607144 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-09 00:55:42.607150 | orchestrator | Thursday 09 April 2026 00:49:05 +0000 (0:00:00.658) 0:03:18.825 ******** 2026-04-09 00:55:42.607157 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607163 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 00:55:42.607169 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607175 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607181 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607187 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607193 | orchestrator | 2026-04-09 00:55:42.607200 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-09 00:55:42.607206 | orchestrator | Thursday 09 April 2026 00:49:05 +0000 (0:00:00.512) 0:03:19.337 ******** 2026-04-09 00:55:42.607212 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607218 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607225 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607231 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607237 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607243 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607249 | orchestrator | 2026-04-09 00:55:42.607255 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-09 00:55:42.607261 | orchestrator | Thursday 09 April 2026 00:49:06 +0000 (0:00:00.522) 0:03:19.860 ******** 2026-04-09 00:55:42.607267 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607273 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607279 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607285 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.607291 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.607298 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.607304 | orchestrator | 2026-04-09 00:55:42.607310 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-09 00:55:42.607316 | orchestrator | Thursday 09 April 2026 00:49:07 
+0000 (0:00:01.768) 0:03:21.628 ******** 2026-04-09 00:55:42.607322 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607328 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607334 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607340 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.607346 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.607352 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.607358 | orchestrator | 2026-04-09 00:55:42.607364 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-09 00:55:42.607381 | orchestrator | Thursday 09 April 2026 00:49:08 +0000 (0:00:00.608) 0:03:22.237 ******** 2026-04-09 00:55:42.607391 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607397 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607403 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607410 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.607416 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.607422 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.607428 | orchestrator | 2026-04-09 00:55:42.607434 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-09 00:55:42.607440 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:00.716) 0:03:22.953 ******** 2026-04-09 00:55:42.607447 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607453 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607463 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607469 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607475 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607482 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607488 | orchestrator | 2026-04-09 00:55:42.607494 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2026-04-09 00:55:42.607500 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:00.508) 0:03:23.462 ******** 2026-04-09 00:55:42.607506 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607513 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607519 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607525 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.607531 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.607537 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.607544 | orchestrator | 2026-04-09 00:55:42.607569 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-09 00:55:42.607576 | orchestrator | Thursday 09 April 2026 00:49:10 +0000 (0:00:00.758) 0:03:24.220 ******** 2026-04-09 00:55:42.607582 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607588 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607594 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607602 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-09 00:55:42.607609 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-09 00:55:42.607616 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-09 00:55:42.607622 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-09 00:55:42.607629 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607635 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607641 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-09 00:55:42.607648 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-09 00:55:42.607654 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607660 | orchestrator | 2026-04-09 00:55:42.607666 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 00:55:42.607676 | orchestrator | Thursday 09 April 2026 00:49:11 +0000 (0:00:00.619) 0:03:24.840 ******** 2026-04-09 00:55:42.607683 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607689 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607695 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607701 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607707 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607714 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607720 | orchestrator | 2026-04-09 00:55:42.607726 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 00:55:42.607735 | orchestrator | Thursday 09 April 2026 00:49:11 +0000 (0:00:00.640) 0:03:25.481 ******** 2026-04-09 00:55:42.607741 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607747 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607753 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607759 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607766 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607772 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607778 | orchestrator | 2026-04-09 00:55:42.607784 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 00:55:42.607791 | orchestrator | Thursday 09 April 2026 00:49:12 +0000 (0:00:00.491) 0:03:25.972 ******** 2026-04-09 00:55:42.607797 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607803 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607809 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607815 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607821 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607827 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607833 | orchestrator | 2026-04-09 00:55:42.607840 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 00:55:42.607846 | orchestrator | Thursday 09 April 2026 00:49:12 +0000 (0:00:00.717) 0:03:26.690 ******** 2026-04-09 00:55:42.607852 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607858 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607864 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607902 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607909 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607915 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607921 | orchestrator | 2026-04-09 00:55:42.607927 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 00:55:42.607934 | orchestrator | Thursday 09 April 2026 00:49:13 +0000 (0:00:00.527) 0:03:27.218 ******** 2026-04-09 00:55:42.607940 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.607965 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.607972 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.607979 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.607985 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.607991 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.607997 | orchestrator | 2026-04-09 00:55:42.608003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 00:55:42.608009 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.656) 0:03:27.874 ******** 2026-04-09 00:55:42.608016 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.608022 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.608028 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.608034 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.608040 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.608046 | orchestrator | ok: 
[testbed-node-5] 2026-04-09 00:55:42.608053 | orchestrator | 2026-04-09 00:55:42.608059 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 00:55:42.608065 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.678) 0:03:28.553 ******** 2026-04-09 00:55:42.608076 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 00:55:42.608083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 00:55:42.608090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 00:55:42.608096 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.608102 | orchestrator | 2026-04-09 00:55:42.608108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 00:55:42.608114 | orchestrator | Thursday 09 April 2026 00:49:15 +0000 (0:00:00.511) 0:03:29.065 ******** 2026-04-09 00:55:42.608121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 00:55:42.608127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 00:55:42.608133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 00:55:42.608139 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.608145 | orchestrator | 2026-04-09 00:55:42.608151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 00:55:42.608157 | orchestrator | Thursday 09 April 2026 00:49:15 +0000 (0:00:00.643) 0:03:29.708 ******** 2026-04-09 00:55:42.608163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-09 00:55:42.608169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-09 00:55:42.608175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-09 00:55:42.608181 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.608188 | 
orchestrator | 2026-04-09 00:55:42.608194 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 00:55:42.608200 | orchestrator | Thursday 09 April 2026 00:49:16 +0000 (0:00:00.357) 0:03:30.066 ******** 2026-04-09 00:55:42.608206 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.608212 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.608218 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.608224 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.608230 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.608236 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.608243 | orchestrator | 2026-04-09 00:55:42.608249 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 00:55:42.608255 | orchestrator | Thursday 09 April 2026 00:49:16 +0000 (0:00:00.544) 0:03:30.610 ******** 2026-04-09 00:55:42.608262 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-09 00:55:42.608267 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.608271 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-09 00:55:42.608275 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.608279 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-09 00:55:42.608282 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.608286 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 00:55:42.608290 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 00:55:42.608294 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 00:55:42.608298 | orchestrator | 2026-04-09 00:55:42.608301 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 00:55:42.608311 | orchestrator | Thursday 09 April 2026 00:49:18 +0000 (0:00:01.951) 0:03:32.562 ******** 2026-04-09 00:55:42.608317 | orchestrator | changed: 
[testbed-node-1] 2026-04-09 00:55:42.608323 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.608328 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.608334 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.608340 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.608347 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.608351 | orchestrator | 2026-04-09 00:55:42.608355 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:55:42.608358 | orchestrator | Thursday 09 April 2026 00:49:21 +0000 (0:00:02.491) 0:03:35.053 ******** 2026-04-09 00:55:42.608363 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.608369 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.608380 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.608385 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.608389 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.608393 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.608396 | orchestrator | 2026-04-09 00:55:42.608400 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-09 00:55:42.608404 | orchestrator | Thursday 09 April 2026 00:49:22 +0000 (0:00:01.051) 0:03:36.105 ******** 2026-04-09 00:55:42.608408 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.608411 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.608415 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.608419 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-04-09 00:55:42.608423 | orchestrator | 2026-04-09 00:55:42.608427 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-09 00:55:42.608430 | orchestrator | Thursday 09 April 2026 00:49:23 +0000 (0:00:01.224) 
0:03:37.329 ******** 2026-04-09 00:55:42.608434 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.608438 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.608442 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.608461 | orchestrator | 2026-04-09 00:55:42.608465 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-09 00:55:42.608469 | orchestrator | Thursday 09 April 2026 00:49:23 +0000 (0:00:00.334) 0:03:37.663 ******** 2026-04-09 00:55:42.608473 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.608477 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.608481 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.608484 | orchestrator | 2026-04-09 00:55:42.608488 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-09 00:55:42.608492 | orchestrator | Thursday 09 April 2026 00:49:25 +0000 (0:00:01.166) 0:03:38.830 ******** 2026-04-09 00:55:42.608496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:55:42.608500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:55:42.608503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:55:42.608507 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.608511 | orchestrator | 2026-04-09 00:55:42.608515 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-09 00:55:42.608518 | orchestrator | Thursday 09 April 2026 00:49:25 +0000 (0:00:00.804) 0:03:39.635 ******** 2026-04-09 00:55:42.608522 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.608526 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.608530 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.608533 | orchestrator | 2026-04-09 00:55:42.608537 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] 
**********************************
2026-04-09 00:55:42.608541 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.573) 0:03:40.208 ********
2026-04-09 00:55:42.608545 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.608549 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.608552 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.608556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.608560 | orchestrator |
2026-04-09 00:55:42.608564 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-09 00:55:42.608568 | orchestrator | Thursday 09 April 2026 00:49:27 +0000 (0:00:00.777) 0:03:40.986 ********
2026-04-09 00:55:42.608572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:55:42.608575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:55:42.608579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:55:42.608583 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608587 | orchestrator |
2026-04-09 00:55:42.608590 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-09 00:55:42.608597 | orchestrator | Thursday 09 April 2026 00:49:27 +0000 (0:00:00.608) 0:03:41.595 ********
2026-04-09 00:55:42.608601 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608605 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.608608 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.608612 | orchestrator |
2026-04-09 00:55:42.608616 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-09 00:55:42.608620 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:00.593) 0:03:42.188 ********
2026-04-09 00:55:42.608623 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608627 | orchestrator |
2026-04-09 00:55:42.608631 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-09 00:55:42.608635 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:00.247) 0:03:42.436 ********
2026-04-09 00:55:42.608638 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608642 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.608646 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.608650 | orchestrator |
2026-04-09 00:55:42.608653 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-09 00:55:42.608657 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:00.329) 0:03:42.765 ********
2026-04-09 00:55:42.608661 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608665 | orchestrator |
2026-04-09 00:55:42.608668 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-09 00:55:42.608674 | orchestrator | Thursday 09 April 2026 00:49:29 +0000 (0:00:00.199) 0:03:42.964 ********
2026-04-09 00:55:42.608678 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608682 | orchestrator |
2026-04-09 00:55:42.608686 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-09 00:55:42.608690 | orchestrator | Thursday 09 April 2026 00:49:29 +0000 (0:00:00.240) 0:03:43.205 ********
2026-04-09 00:55:42.608694 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608700 | orchestrator |
2026-04-09 00:55:42.608706 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-09 00:55:42.608717 | orchestrator | Thursday 09 April 2026 00:49:29 +0000 (0:00:00.117) 0:03:43.322 ********
2026-04-09 00:55:42.608724 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608730 | orchestrator |
2026-04-09 00:55:42.608735 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-09 00:55:42.608742 | orchestrator | Thursday 09 April 2026 00:49:29 +0000 (0:00:00.193) 0:03:43.515 ********
2026-04-09 00:55:42.608748 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608755 | orchestrator |
2026-04-09 00:55:42.608761 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-09 00:55:42.608767 | orchestrator | Thursday 09 April 2026 00:49:29 +0000 (0:00:00.221) 0:03:43.736 ********
2026-04-09 00:55:42.608773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:55:42.608780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:55:42.608786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:55:42.608793 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608799 | orchestrator |
2026-04-09 00:55:42.608806 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-09 00:55:42.608822 | orchestrator | Thursday 09 April 2026 00:49:30 +0000 (0:00:00.681) 0:03:44.418 ********
2026-04-09 00:55:42.608829 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608854 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.608862 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.608868 | orchestrator |
2026-04-09 00:55:42.608889 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-09 00:55:42.608895 | orchestrator | Thursday 09 April 2026 00:49:31 +0000 (0:00:00.528) 0:03:44.947 ********
2026-04-09 00:55:42.608901 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608907 | orchestrator |
2026-04-09 00:55:42.608919 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-09 00:55:42.608926 | orchestrator | Thursday 09 April 2026 00:49:31 +0000 (0:00:00.227) 0:03:45.174 ********
2026-04-09 00:55:42.608931 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.608937 | orchestrator |
2026-04-09 00:55:42.608943 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-09 00:55:42.608949 | orchestrator | Thursday 09 April 2026 00:49:31 +0000 (0:00:00.245) 0:03:45.420 ********
2026-04-09 00:55:42.608955 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.608961 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.608967 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.608972 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.608979 | orchestrator |
2026-04-09 00:55:42.608984 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-09 00:55:42.608990 | orchestrator | Thursday 09 April 2026 00:49:32 +0000 (0:00:01.001) 0:03:46.421 ********
2026-04-09 00:55:42.608996 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.609002 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.609008 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.609013 | orchestrator |
2026-04-09 00:55:42.609019 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-09 00:55:42.609025 | orchestrator | Thursday 09 April 2026 00:49:32 +0000 (0:00:00.320) 0:03:46.741 ********
2026-04-09 00:55:42.609030 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.609036 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.609042 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.609048 | orchestrator |
2026-04-09 00:55:42.609054 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-09 00:55:42.609061 | orchestrator | Thursday 09 April 2026 00:49:34 +0000 (0:00:01.165) 0:03:47.907 ********
2026-04-09 00:55:42.609067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:55:42.609073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:55:42.609079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:55:42.609085 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.609091 | orchestrator |
2026-04-09 00:55:42.609097 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-09 00:55:42.609103 | orchestrator | Thursday 09 April 2026 00:49:34 +0000 (0:00:00.859) 0:03:48.766 ********
2026-04-09 00:55:42.609110 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.609116 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.609122 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.609128 | orchestrator |
2026-04-09 00:55:42.609135 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-09 00:55:42.609141 | orchestrator | Thursday 09 April 2026 00:49:35 +0000 (0:00:00.323) 0:03:49.090 ********
2026-04-09 00:55:42.609147 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609153 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609159 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609165 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.609171 | orchestrator |
2026-04-09 00:55:42.609177 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-09 00:55:42.609183 | orchestrator | Thursday 09 April 2026 00:49:36 +0000 (0:00:00.998) 0:03:50.088 ********
2026-04-09 00:55:42.609189 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.609196 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.609201 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.609207 | orchestrator |
2026-04-09 00:55:42.609214 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-09 00:55:42.609223 | orchestrator | Thursday 09 April 2026 00:49:36 +0000 (0:00:00.305) 0:03:50.394 ********
2026-04-09 00:55:42.609230 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.609241 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.609247 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.609253 | orchestrator |
2026-04-09 00:55:42.609259 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-09 00:55:42.609265 | orchestrator | Thursday 09 April 2026 00:49:38 +0000 (0:00:01.487) 0:03:51.882 ********
2026-04-09 00:55:42.609271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:55:42.609276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:55:42.609282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:55:42.609289 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.609295 | orchestrator |
2026-04-09 00:55:42.609301 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-09 00:55:42.609307 | orchestrator | Thursday 09 April 2026 00:49:38 +0000 (0:00:00.712) 0:03:52.594 ********
2026-04-09 00:55:42.609313 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.609319 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.609325 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.609331 | orchestrator |
2026-04-09 00:55:42.609337 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-09 00:55:42.609343 | orchestrator | Thursday 09 April 2026 00:49:39 +0000 (0:00:00.326) 0:03:52.921 ********
2026-04-09 00:55:42.609350 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609356 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609362 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609369 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.609375 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.609381 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.609387 | orchestrator |
2026-04-09 00:55:42.609419 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-09 00:55:42.609426 | orchestrator | Thursday 09 April 2026 00:49:39 +0000 (0:00:00.741) 0:03:53.662 ********
2026-04-09 00:55:42.609433 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.609439 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.609445 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.609451 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:55:42.609458 | orchestrator |
2026-04-09 00:55:42.609464 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-09 00:55:42.609470 | orchestrator | Thursday 09 April 2026 00:49:41 +0000 (0:00:01.159) 0:03:54.822 ********
2026-04-09 00:55:42.609476 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.609483 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.609489 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.609495 | orchestrator |
2026-04-09 00:55:42.609501 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-09 00:55:42.609508 | orchestrator | Thursday 09 April 2026 00:49:41 +0000 (0:00:00.368) 0:03:55.191 ********
2026-04-09 00:55:42.609512 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.609516 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.609520 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.609523 | orchestrator |
2026-04-09 00:55:42.609527 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-09 00:55:42.609531 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:01.460) 0:03:56.652 ********
2026-04-09 00:55:42.609535 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:55:42.609538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:55:42.609542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:55:42.609546 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609550 | orchestrator |
2026-04-09 00:55:42.609554 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-09 00:55:42.609557 | orchestrator | Thursday 09 April 2026 00:49:43 +0000 (0:00:00.554) 0:03:57.206 ********
2026-04-09 00:55:42.609565 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.609569 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.609573 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.609576 | orchestrator |
2026-04-09 00:55:42.609580 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-09 00:55:42.609584 | orchestrator |
2026-04-09 00:55:42.609588 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 00:55:42.609592 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:00.708) 0:03:57.915 ********
2026-04-09 00:55:42.609596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:55:42.609600 | orchestrator |
2026-04-09 00:55:42.609603 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 00:55:42.609607 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:00.503) 0:03:58.418 ********
2026-04-09 00:55:42.609611 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:55:42.609615 | orchestrator |
2026-04-09 00:55:42.609618 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 00:55:42.609622 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:00.387) 0:03:58.806 ********
2026-04-09 00:55:42.609626 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.609630 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.609634 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.609637 | orchestrator |
2026-04-09 00:55:42.609641 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 00:55:42.609645 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:00.712) 0:03:59.519 ********
2026-04-09 00:55:42.609649 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609652 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609656 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609660 | orchestrator |
2026-04-09 00:55:42.609668 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 00:55:42.609672 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:00.461) 0:03:59.980 ********
2026-04-09 00:55:42.609676 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609680 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609684 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609688 | orchestrator |
2026-04-09 00:55:42.609691 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 00:55:42.609695 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:00.291) 0:04:00.271 ********
2026-04-09 00:55:42.609699 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609703 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609706 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609710 | orchestrator |
2026-04-09 00:55:42.609714 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 00:55:42.609718 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:00.306) 0:04:00.578 ********
2026-04-09 00:55:42.609721 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.609725 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.609729 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.609733 | orchestrator |
2026-04-09 00:55:42.609737 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 00:55:42.609740 | orchestrator | Thursday 09 April 2026 00:49:47 +0000 (0:00:00.682) 0:04:01.260 ********
2026-04-09 00:55:42.609744 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609748 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609752 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609755 | orchestrator |
2026-04-09 00:55:42.609759 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 00:55:42.609763 | orchestrator | Thursday 09 April 2026 00:49:47 +0000 (0:00:00.410) 0:04:01.670 ********
2026-04-09 00:55:42.609770 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609774 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609778 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609782 | orchestrator |
2026-04-09 00:55:42.609800 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 00:55:42.609805 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.270) 0:04:01.941 ********
2026-04-09 00:55:42.609809 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.609812 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.609816 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.609820 | orchestrator |
2026-04-09 00:55:42.609824 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 00:55:42.609828 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.612) 0:04:02.554 ********
2026-04-09 00:55:42.609831 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.609835 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.609839 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.609843 | orchestrator |
2026-04-09 00:55:42.609846 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 00:55:42.609850 | orchestrator | Thursday 09 April 2026 00:49:49 +0000 (0:00:00.741) 0:04:03.295 ********
2026-04-09 00:55:42.609854 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609858 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609862 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609865 | orchestrator |
2026-04-09 00:55:42.609882 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 00:55:42.609890 | orchestrator | Thursday 09 April 2026 00:49:49 +0000 (0:00:00.460) 0:04:03.756 ********
2026-04-09 00:55:42.609896 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.609902 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.609909 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.609914 | orchestrator |
2026-04-09 00:55:42.609920 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 00:55:42.609926 | orchestrator | Thursday 09 April 2026 00:49:50 +0000 (0:00:00.362) 0:04:04.118 ********
2026-04-09 00:55:42.609932 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609938 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609945 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609951 | orchestrator |
2026-04-09 00:55:42.609957 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 00:55:42.609964 | orchestrator | Thursday 09 April 2026 00:49:50 +0000 (0:00:00.297) 0:04:04.415 ********
2026-04-09 00:55:42.609969 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609973 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.609977 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.609981 | orchestrator |
2026-04-09 00:55:42.609984 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 00:55:42.609988 | orchestrator | Thursday 09 April 2026 00:49:50 +0000 (0:00:00.303) 0:04:04.719 ********
2026-04-09 00:55:42.609993 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.609999 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.610005 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.610033 | orchestrator |
2026-04-09 00:55:42.610042 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:55:42.610049 | orchestrator | Thursday 09 April 2026 00:49:51 +0000 (0:00:00.387) 0:04:05.106 ********
2026-04-09 00:55:42.610056 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.610060 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.610064 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.610070 | orchestrator |
2026-04-09 00:55:42.610077 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 00:55:42.610083 | orchestrator | Thursday 09 April 2026 00:49:52 +0000 (0:00:01.051) 0:04:06.157 ********
2026-04-09 00:55:42.610089 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.610116 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.610123 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.610129 | orchestrator |
2026-04-09 00:55:42.610136 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 00:55:42.610142 | orchestrator | Thursday 09 April 2026 00:49:52 +0000 (0:00:00.412) 0:04:06.570 ********
2026-04-09 00:55:42.610149 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610155 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610162 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610168 | orchestrator |
2026-04-09 00:55:42.610175 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 00:55:42.610184 | orchestrator | Thursday 09 April 2026 00:49:53 +0000 (0:00:00.455) 0:04:07.026 ********
2026-04-09 00:55:42.610191 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610197 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610203 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610209 | orchestrator |
2026-04-09 00:55:42.610215 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 00:55:42.610221 | orchestrator | Thursday 09 April 2026 00:49:53 +0000 (0:00:00.261) 0:04:07.287 ********
2026-04-09 00:55:42.610227 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610234 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610240 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610246 | orchestrator |
2026-04-09 00:55:42.610253 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-09 00:55:42.610259 | orchestrator | Thursday 09 April 2026 00:49:54 +0000 (0:00:00.630) 0:04:07.918 ********
2026-04-09 00:55:42.610265 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610271 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610277 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610283 | orchestrator |
2026-04-09 00:55:42.610289 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-09 00:55:42.610295 | orchestrator | Thursday 09 April 2026 00:49:54 +0000 (0:00:00.424) 0:04:08.342 ********
2026-04-09 00:55:42.610302 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:55:42.610308 | orchestrator |
2026-04-09 00:55:42.610314 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-09 00:55:42.610321 | orchestrator | Thursday 09 April 2026 00:49:55 +0000 (0:00:00.948) 0:04:09.291 ********
2026-04-09 00:55:42.610327 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.610333 | orchestrator |
2026-04-09 00:55:42.610339 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-09 00:55:42.610368 | orchestrator | Thursday 09 April 2026 00:49:55 +0000 (0:00:00.207) 0:04:09.498 ********
2026-04-09 00:55:42.610375 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:55:42.610382 | orchestrator |
2026-04-09 00:55:42.610388 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-09 00:55:42.610394 | orchestrator | Thursday 09 April 2026 00:49:57 +0000 (0:00:01.356) 0:04:10.854 ********
2026-04-09 00:55:42.610400 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610406 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610412 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610418 | orchestrator |
2026-04-09 00:55:42.610424 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-09 00:55:42.610430 | orchestrator | Thursday 09 April 2026 00:49:57 +0000 (0:00:00.725) 0:04:11.580 ********
2026-04-09 00:55:42.610436 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610442 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610447 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610453 | orchestrator |
2026-04-09 00:55:42.610459 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-09 00:55:42.610465 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:00.363) 0:04:11.943 ********
2026-04-09 00:55:42.610471 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.610482 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.610489 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.610495 | orchestrator |
2026-04-09 00:55:42.610501 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-09 00:55:42.610507 | orchestrator | Thursday 09 April 2026 00:49:59 +0000 (0:00:01.272) 0:04:13.216 ********
2026-04-09 00:55:42.610514 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.610520 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.610526 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.610533 | orchestrator |
2026-04-09 00:55:42.610539 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-09 00:55:42.610545 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:01.024) 0:04:14.240 ********
2026-04-09 00:55:42.610551 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.610558 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.610564 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.610570 | orchestrator |
2026-04-09 00:55:42.610577 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-09 00:55:42.610583 | orchestrator | Thursday 09 April 2026 00:50:01 +0000 (0:00:00.916) 0:04:15.157 ********
2026-04-09 00:55:42.610590 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610596 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610602 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610608 | orchestrator |
2026-04-09 00:55:42.610615 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-09 00:55:42.610621 | orchestrator | Thursday 09 April 2026 00:50:02 +0000 (0:00:00.719) 0:04:15.877 ********
2026-04-09 00:55:42.610627 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.610634 | orchestrator |
2026-04-09 00:55:42.610640 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-09 00:55:42.610646 | orchestrator | Thursday 09 April 2026 00:50:03 +0000 (0:00:01.261) 0:04:17.138 ********
2026-04-09 00:55:42.610653 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610659 | orchestrator |
2026-04-09 00:55:42.610665 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-09 00:55:42.610672 | orchestrator | Thursday 09 April 2026 00:50:04 +0000 (0:00:00.692) 0:04:17.830 ********
2026-04-09 00:55:42.610678 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-09 00:55:42.610684 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:55:42.610690 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:55:42.610697 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:55:42.610703 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-09 00:55:42.610710 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:55:42.610716 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 00:55:42.610723 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-09 00:55:42.610733 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 00:55:42.610739 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-09 00:55:42.610745 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-09 00:55:42.610752 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-09 00:55:42.610758 | orchestrator |
2026-04-09 00:55:42.610764 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-09 00:55:42.610771 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:03.838) 0:04:21.669 ********
2026-04-09 00:55:42.610778 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.610795 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.610802 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.610809 | orchestrator |
2026-04-09 00:55:42.610815 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-09 00:55:42.610825 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:01.277) 0:04:22.946 ********
2026-04-09 00:55:42.610831 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610837 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610843 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610849 | orchestrator |
2026-04-09 00:55:42.610855 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-09 00:55:42.610862 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:00.296) 0:04:23.242 ********
2026-04-09 00:55:42.610868 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.610901 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.610908 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.610914 | orchestrator |
2026-04-09 00:55:42.610920 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-09 00:55:42.610926 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:00.313) 0:04:23.556 ********
2026-04-09 00:55:42.610930 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.610934 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.610937 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.610941 | orchestrator |
2026-04-09 00:55:42.610966 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-09 00:55:42.610973 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:02.197) 0:04:25.754 ********
2026-04-09 00:55:42.610979 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.610985 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.610991 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.610997 | orchestrator |
2026-04-09 00:55:42.611003 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-09 00:55:42.611009 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:01.463) 0:04:27.218 ********
2026-04-09 00:55:42.611015 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.611021 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.611028 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.611034 | orchestrator |
2026-04-09 00:55:42.611040 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-09 00:55:42.611046 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.469) 0:04:27.687 ********
2026-04-09 00:55:42.611052 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:55:42.611058 | orchestrator |
2026-04-09 00:55:42.611064 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-09 00:55:42.611070 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:00.608) 0:04:28.296 ********
2026-04-09 00:55:42.611076 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.611083 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.611089 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.611095 | orchestrator |
2026-04-09 00:55:42.611102 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-09 00:55:42.611108 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:00.434) 0:04:28.730 ********
2026-04-09 00:55:42.611114 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.611120 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.611126 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.611132 | orchestrator |
2026-04-09 00:55:42.611138 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-09 00:55:42.611144 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:00.262) 0:04:28.993 ********
2026-04-09 00:55:42.611150 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:55:42.611157 | orchestrator |
2026-04-09 00:55:42.611163 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-09 00:55:42.611169 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:00.453) 0:04:29.446 ********
2026-04-09 00:55:42.611175 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.611182 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.611192 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.611198 | orchestrator |
2026-04-09 00:55:42.611204 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-09 00:55:42.611211 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:02.211) 0:04:31.658 ********
2026-04-09 00:55:42.611217 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.611223 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.611229 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.611236 | orchestrator |
2026-04-09 00:55:42.611242 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-09 00:55:42.611248 | orchestrator | Thursday 09 April 2026 00:50:19 +0000 (0:00:01.329) 0:04:32.987 ********
2026-04-09 00:55:42.611255 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.611260 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.611266 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.611272 | orchestrator |
2026-04-09 00:55:42.611278 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-09 00:55:42.611284 | orchestrator | Thursday 09 April 2026 00:50:21 +0000 (0:00:01.968) 0:04:34.956 ********
2026-04-09 00:55:42.611290 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.611297 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.611303 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.611310 | orchestrator |
2026-04-09 00:55:42.611320 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-09 00:55:42.611326 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:01.932) 0:04:36.889 ********
2026-04-09 00:55:42.611333 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:55:42.611339 | orchestrator |
2026-04-09 00:55:42.611345 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-09 00:55:42.611352 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:00.824) 0:04:37.713 ********
2026-04-09 00:55:42.611358 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.611364 | orchestrator |
2026-04-09 00:55:42.611371 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-09 00:55:42.611377 | orchestrator | Thursday 09 April 2026 00:50:24 +0000 (0:00:00.840) 0:04:38.554 ********
2026-04-09 00:55:42.611383 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.611390 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.611396 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.611403 | orchestrator |
2026-04-09 00:55:42.611409 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-09 00:55:42.611415 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:05.774) 0:04:44.329 ********
2026-04-09 00:55:42.611422 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.611428 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:55:42.611434 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:55:42.611441 | orchestrator |
2026-04-09 00:55:42.611447 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-09 00:55:42.611454 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:00.282) 0:04:44.612 ********
2026-04-09 00:55:42.611483 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ceaa746176b1d10354af1e8bbcc35e7c19ad1c'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-09 00:55:42.611492 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ceaa746176b1d10354af1e8bbcc35e7c19ad1c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-09 00:55:42.611505 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ceaa746176b1d10354af1e8bbcc35e7c19ad1c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-09 00:55:42.611513 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ceaa746176b1d10354af1e8bbcc35e7c19ad1c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-09 00:55:42.611520 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ceaa746176b1d10354af1e8bbcc35e7c19ad1c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-09 00:55:42.611527 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ceaa746176b1d10354af1e8bbcc35e7c19ad1c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__35ceaa746176b1d10354af1e8bbcc35e7c19ad1c'}])  2026-04-09 00:55:42.611534 | orchestrator | 2026-04-09 00:55:42.611540 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:55:42.611546 | orchestrator | Thursday 09 April 2026 00:50:41 +0000 (0:00:10.781) 0:04:55.394 ******** 2026-04-09 00:55:42.611553 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.611559 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.611565 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.611571 | orchestrator | 2026-04-09 00:55:42.611577 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-09 00:55:42.611583 | orchestrator | Thursday 09 April 2026 00:50:41 +0000 (0:00:00.314) 0:04:55.709 ******** 2026-04-09 00:55:42.611589 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:55:42.611595 | orchestrator | 2026-04-09 00:55:42.611604 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-09 00:55:42.611611 | orchestrator | Thursday 09 April 2026 00:50:42 +0000 (0:00:00.756) 0:04:56.466 ******** 2026-04-09 00:55:42.611617 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.611623 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.611630 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.611636 | orchestrator | 2026-04-09 00:55:42.611642 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-09 00:55:42.611648 | orchestrator | Thursday 09 April 2026 00:50:42 +0000 (0:00:00.309) 0:04:56.776 ******** 2026-04-09 00:55:42.611654 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.611660 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.611666 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
00:55:42.611672 | orchestrator | 2026-04-09 00:55:42.611678 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-09 00:55:42.611685 | orchestrator | Thursday 09 April 2026 00:50:43 +0000 (0:00:00.330) 0:04:57.106 ******** 2026-04-09 00:55:42.611691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:55:42.611697 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:55:42.611703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:55:42.611709 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.611716 | orchestrator | 2026-04-09 00:55:42.611726 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-09 00:55:42.611732 | orchestrator | Thursday 09 April 2026 00:50:44 +0000 (0:00:00.834) 0:04:57.941 ******** 2026-04-09 00:55:42.611738 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.611744 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.611751 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.611757 | orchestrator | 2026-04-09 00:55:42.611763 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-09 00:55:42.611769 | orchestrator | 2026-04-09 00:55:42.611775 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:55:42.611800 | orchestrator | Thursday 09 April 2026 00:50:45 +0000 (0:00:00.850) 0:04:58.792 ******** 2026-04-09 00:55:42.611807 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:55:42.611814 | orchestrator | 2026-04-09 00:55:42.611820 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:55:42.611826 | orchestrator | Thursday 09 April 2026 00:50:45 +0000 
(0:00:00.499) 0:04:59.292 ******** 2026-04-09 00:55:42.611832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:55:42.611838 | orchestrator | 2026-04-09 00:55:42.611844 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:55:42.611851 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:00.727) 0:05:00.019 ******** 2026-04-09 00:55:42.611857 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.611863 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.611879 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.611886 | orchestrator | 2026-04-09 00:55:42.611893 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:55:42.611898 | orchestrator | Thursday 09 April 2026 00:50:47 +0000 (0:00:00.787) 0:05:00.807 ******** 2026-04-09 00:55:42.611901 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.611905 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.611909 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.611913 | orchestrator | 2026-04-09 00:55:42.611917 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:55:42.611920 | orchestrator | Thursday 09 April 2026 00:50:47 +0000 (0:00:00.297) 0:05:01.104 ******** 2026-04-09 00:55:42.611924 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.611928 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.611932 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.611935 | orchestrator | 2026-04-09 00:55:42.611939 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:55:42.611943 | orchestrator | Thursday 09 April 2026 00:50:47 +0000 (0:00:00.286) 0:05:01.390 ******** 2026-04-09 00:55:42.611947 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.611951 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.611954 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.611958 | orchestrator | 2026-04-09 00:55:42.611962 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:55:42.611966 | orchestrator | Thursday 09 April 2026 00:50:48 +0000 (0:00:00.618) 0:05:02.009 ******** 2026-04-09 00:55:42.611969 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.611973 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.611977 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.611981 | orchestrator | 2026-04-09 00:55:42.611984 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:55:42.611988 | orchestrator | Thursday 09 April 2026 00:50:49 +0000 (0:00:00.835) 0:05:02.844 ******** 2026-04-09 00:55:42.611992 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.611996 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612000 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612003 | orchestrator | 2026-04-09 00:55:42.612007 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:55:42.612015 | orchestrator | Thursday 09 April 2026 00:50:49 +0000 (0:00:00.337) 0:05:03.182 ******** 2026-04-09 00:55:42.612019 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612023 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612029 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612035 | orchestrator | 2026-04-09 00:55:42.612041 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:55:42.612047 | orchestrator | Thursday 09 April 2026 00:50:49 +0000 (0:00:00.348) 0:05:03.531 ******** 2026-04-09 00:55:42.612054 | orchestrator | ok: 
[testbed-node-0] 2026-04-09 00:55:42.612060 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.612066 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.612073 | orchestrator | 2026-04-09 00:55:42.612079 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:55:42.612088 | orchestrator | Thursday 09 April 2026 00:50:50 +0000 (0:00:01.032) 0:05:04.563 ******** 2026-04-09 00:55:42.612095 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.612101 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.612107 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.612114 | orchestrator | 2026-04-09 00:55:42.612120 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:55:42.612127 | orchestrator | Thursday 09 April 2026 00:50:51 +0000 (0:00:00.758) 0:05:05.322 ******** 2026-04-09 00:55:42.612133 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612139 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612144 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612150 | orchestrator | 2026-04-09 00:55:42.612157 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:55:42.612163 | orchestrator | Thursday 09 April 2026 00:50:51 +0000 (0:00:00.268) 0:05:05.590 ******** 2026-04-09 00:55:42.612169 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.612175 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.612181 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.612188 | orchestrator | 2026-04-09 00:55:42.612194 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:55:42.612200 | orchestrator | Thursday 09 April 2026 00:50:52 +0000 (0:00:00.261) 0:05:05.852 ******** 2026-04-09 00:55:42.612207 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612213 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612220 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612227 | orchestrator | 2026-04-09 00:55:42.612231 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:55:42.612235 | orchestrator | Thursday 09 April 2026 00:50:52 +0000 (0:00:00.229) 0:05:06.082 ******** 2026-04-09 00:55:42.612238 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612242 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612246 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612250 | orchestrator | 2026-04-09 00:55:42.612256 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:55:42.612281 | orchestrator | Thursday 09 April 2026 00:50:52 +0000 (0:00:00.477) 0:05:06.559 ******** 2026-04-09 00:55:42.612289 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612295 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612301 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612308 | orchestrator | 2026-04-09 00:55:42.612314 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:55:42.612320 | orchestrator | Thursday 09 April 2026 00:50:53 +0000 (0:00:00.276) 0:05:06.836 ******** 2026-04-09 00:55:42.612326 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612332 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612338 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612344 | orchestrator | 2026-04-09 00:55:42.612360 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:55:42.612366 | orchestrator | Thursday 09 April 2026 00:50:53 +0000 (0:00:00.251) 0:05:07.088 ******** 2026-04-09 00:55:42.612377 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612383 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612390 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612396 | orchestrator | 2026-04-09 00:55:42.612402 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:55:42.612410 | orchestrator | Thursday 09 April 2026 00:50:53 +0000 (0:00:00.233) 0:05:07.321 ******** 2026-04-09 00:55:42.612414 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.612417 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.612421 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.612425 | orchestrator | 2026-04-09 00:55:42.612429 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:55:42.612433 | orchestrator | Thursday 09 April 2026 00:50:53 +0000 (0:00:00.450) 0:05:07.772 ******** 2026-04-09 00:55:42.612437 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.612441 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.612446 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.612452 | orchestrator | 2026-04-09 00:55:42.612458 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:55:42.612464 | orchestrator | Thursday 09 April 2026 00:50:54 +0000 (0:00:00.374) 0:05:08.146 ******** 2026-04-09 00:55:42.612470 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.612477 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.612483 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.612489 | orchestrator | 2026-04-09 00:55:42.612496 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-09 00:55:42.612502 | orchestrator | Thursday 09 April 2026 00:50:55 +0000 (0:00:00.720) 0:05:08.867 ******** 2026-04-09 00:55:42.612508 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:55:42.612514 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:55:42.612521 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:55:42.612527 | orchestrator | 2026-04-09 00:55:42.612533 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-09 00:55:42.612539 | orchestrator | Thursday 09 April 2026 00:50:55 +0000 (0:00:00.806) 0:05:09.673 ******** 2026-04-09 00:55:42.612545 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:55:42.612552 | orchestrator | 2026-04-09 00:55:42.612558 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-09 00:55:42.612564 | orchestrator | Thursday 09 April 2026 00:50:56 +0000 (0:00:00.546) 0:05:10.220 ******** 2026-04-09 00:55:42.612571 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.612578 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.612584 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.612590 | orchestrator | 2026-04-09 00:55:42.612594 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-09 00:55:42.612597 | orchestrator | Thursday 09 April 2026 00:50:57 +0000 (0:00:00.662) 0:05:10.883 ******** 2026-04-09 00:55:42.612601 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612605 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612609 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612613 | orchestrator | 2026-04-09 00:55:42.612623 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-09 00:55:42.612630 | orchestrator | Thursday 09 April 2026 00:50:57 +0000 (0:00:00.270) 0:05:11.153 ******** 2026-04-09 00:55:42.612636 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 
00:55:42.612641 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:55:42.612647 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:55:42.612652 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-09 00:55:42.612658 | orchestrator | 2026-04-09 00:55:42.612664 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-09 00:55:42.612674 | orchestrator | Thursday 09 April 2026 00:51:05 +0000 (0:00:07.779) 0:05:18.933 ******** 2026-04-09 00:55:42.612680 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.612686 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.612692 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.612698 | orchestrator | 2026-04-09 00:55:42.612705 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-09 00:55:42.612711 | orchestrator | Thursday 09 April 2026 00:51:05 +0000 (0:00:00.523) 0:05:19.456 ******** 2026-04-09 00:55:42.612717 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 00:55:42.612723 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 00:55:42.612729 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 00:55:42.612735 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.612742 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.612748 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 00:55:42.612754 | orchestrator | 2026-04-09 00:55:42.612759 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:55:42.612766 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:01.815) 0:05:21.272 ******** 2026-04-09 00:55:42.612795 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 00:55:42.612801 | 
orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 00:55:42.612807 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 00:55:42.612813 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:55:42.612819 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-09 00:55:42.612826 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-09 00:55:42.612832 | orchestrator | 2026-04-09 00:55:42.612838 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-09 00:55:42.612844 | orchestrator | Thursday 09 April 2026 00:51:08 +0000 (0:00:01.124) 0:05:22.397 ******** 2026-04-09 00:55:42.612850 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.612857 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.612863 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.612897 | orchestrator | 2026-04-09 00:55:42.612904 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-09 00:55:42.612911 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.742) 0:05:23.139 ******** 2026-04-09 00:55:42.612917 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612923 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612929 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612935 | orchestrator | 2026-04-09 00:55:42.612941 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-09 00:55:42.612948 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.367) 0:05:23.507 ******** 2026-04-09 00:55:42.612954 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612961 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.612965 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.612969 | orchestrator | 2026-04-09 00:55:42.612972 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2026-04-09 00:55:42.612976 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.223) 0:05:23.731 ******** 2026-04-09 00:55:42.612980 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:55:42.612984 | orchestrator | 2026-04-09 00:55:42.612988 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-09 00:55:42.612991 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:00.453) 0:05:24.185 ******** 2026-04-09 00:55:42.612995 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.612999 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.613003 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.613011 | orchestrator | 2026-04-09 00:55:42.613015 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-09 00:55:42.613018 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:00.432) 0:05:24.617 ******** 2026-04-09 00:55:42.613022 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.613026 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.613030 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.613034 | orchestrator | 2026-04-09 00:55:42.613037 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-09 00:55:42.613041 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 (0:00:00.231) 0:05:24.849 ******** 2026-04-09 00:55:42.613045 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:55:42.613049 | orchestrator | 2026-04-09 00:55:42.613053 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-09 00:55:42.613057 | orchestrator | Thursday 09 April 2026 00:51:11 
+0000 (0:00:00.353) 0:05:25.203 ******** 2026-04-09 00:55:42.613060 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.613064 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.613068 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.613072 | orchestrator | 2026-04-09 00:55:42.613075 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-09 00:55:42.613079 | orchestrator | Thursday 09 April 2026 00:51:12 +0000 (0:00:01.503) 0:05:26.706 ******** 2026-04-09 00:55:42.613083 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.613087 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.613091 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.613094 | orchestrator | 2026-04-09 00:55:42.613101 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-09 00:55:42.613105 | orchestrator | Thursday 09 April 2026 00:51:14 +0000 (0:00:01.259) 0:05:27.966 ******** 2026-04-09 00:55:42.613109 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.613112 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.613116 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.613120 | orchestrator | 2026-04-09 00:55:42.613124 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-09 00:55:42.613128 | orchestrator | Thursday 09 April 2026 00:51:15 +0000 (0:00:01.747) 0:05:29.713 ******** 2026-04-09 00:55:42.613131 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.613135 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.613139 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.613143 | orchestrator | 2026-04-09 00:55:42.613146 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-09 00:55:42.613150 | orchestrator | Thursday 09 April 2026 00:51:17 +0000 
(0:00:01.963) 0:05:31.677 ******** 2026-04-09 00:55:42.613154 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.613158 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.613161 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-09 00:55:42.613165 | orchestrator | 2026-04-09 00:55:42.613169 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-09 00:55:42.613173 | orchestrator | Thursday 09 April 2026 00:51:18 +0000 (0:00:00.596) 0:05:32.273 ******** 2026-04-09 00:55:42.613178 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-09 00:55:42.613185 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-09 00:55:42.613210 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:55:42.613217 | orchestrator | 2026-04-09 00:55:42.613223 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-09 00:55:42.613230 | orchestrator | Thursday 09 April 2026 00:51:31 +0000 (0:00:12.941) 0:05:45.215 ******** 2026-04-09 00:55:42.613236 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:55:42.613247 | orchestrator | 2026-04-09 00:55:42.613251 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-09 00:55:42.613255 | orchestrator | Thursday 09 April 2026 00:51:32 +0000 (0:00:01.370) 0:05:46.585 ******** 2026-04-09 00:55:42.613259 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.613263 | orchestrator | 2026-04-09 00:55:42.613266 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-09 00:55:42.613270 | orchestrator | Thursday 09 April 2026 00:51:33 +0000 (0:00:00.325) 0:05:46.910 
********
2026-04-09 00:55:42.613274 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.613278 | orchestrator |
2026-04-09 00:55:42.613282 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-09 00:55:42.613285 | orchestrator | Thursday 09 April 2026 00:51:33 +0000 (0:00:00.149) 0:05:47.060 ********
2026-04-09 00:55:42.613289 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-09 00:55:42.613293 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-09 00:55:42.613297 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-09 00:55:42.613300 | orchestrator |
2026-04-09 00:55:42.613304 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-09 00:55:42.613308 | orchestrator | Thursday 09 April 2026 00:51:39 +0000 (0:00:06.137) 0:05:53.197 ********
2026-04-09 00:55:42.613312 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-09 00:55:42.613316 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-09 00:55:42.613319 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-09 00:55:42.613323 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-09 00:55:42.613327 | orchestrator |
2026-04-09 00:55:42.613331 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 00:55:42.613334 | orchestrator | Thursday 09 April 2026 00:51:43 +0000 (0:00:04.463) 0:05:57.660 ********
2026-04-09 00:55:42.613338 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.613342 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.613346 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.613350 | orchestrator |
2026-04-09 00:55:42.613353 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-09 00:55:42.613357 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:00.665) 0:05:58.326 ********
2026-04-09 00:55:42.613361 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:55:42.613365 | orchestrator |
2026-04-09 00:55:42.613369 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-09 00:55:42.613372 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:00.461) 0:05:58.788 ********
2026-04-09 00:55:42.613376 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.613380 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.613384 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.613387 | orchestrator |
2026-04-09 00:55:42.613391 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-09 00:55:42.613395 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:00.448) 0:05:59.236 ********
2026-04-09 00:55:42.613399 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:55:42.613403 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:55:42.613406 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:55:42.613410 | orchestrator |
2026-04-09 00:55:42.613414 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-09 00:55:42.613418 | orchestrator | Thursday 09 April 2026 00:51:47 +0000 (0:00:01.596) 0:06:00.833 ********
2026-04-09 00:55:42.613424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:55:42.613428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:55:42.613432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:55:42.613438 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:55:42.613442 | orchestrator |
2026-04-09 00:55:42.613446 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-09 00:55:42.613450 | orchestrator | Thursday 09 April 2026 00:51:47 +0000 (0:00:00.556) 0:06:01.390 ********
2026-04-09 00:55:42.613453 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:55:42.613457 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:55:42.613461 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:55:42.613465 | orchestrator |
2026-04-09 00:55:42.613468 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-09 00:55:42.613472 | orchestrator |
2026-04-09 00:55:42.613476 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 00:55:42.613480 | orchestrator | Thursday 09 April 2026 00:51:48 +0000 (0:00:00.514) 0:06:01.905 ********
2026-04-09 00:55:42.613483 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.613488 | orchestrator |
2026-04-09 00:55:42.613491 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 00:55:42.613495 | orchestrator | Thursday 09 April 2026 00:51:48 +0000 (0:00:00.618) 0:06:02.523 ********
2026-04-09 00:55:42.613499 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.613503 | orchestrator |
2026-04-09 00:55:42.613506 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 00:55:42.613525 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.611) 0:06:03.134 ********
2026-04-09 00:55:42.613529 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.613533 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.613537 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.613541 | orchestrator |
2026-04-09 00:55:42.613546 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 00:55:42.613552 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.346) 0:06:03.480 ********
2026-04-09 00:55:42.613558 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.613565 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.613571 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.613578 | orchestrator |
2026-04-09 00:55:42.613585 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 00:55:42.613591 | orchestrator | Thursday 09 April 2026 00:51:50 +0000 (0:00:00.757) 0:06:04.238 ********
2026-04-09 00:55:42.613598 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.613605 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.613610 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.613616 | orchestrator |
2026-04-09 00:55:42.613622 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 00:55:42.613628 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:00.704) 0:06:04.942 ********
2026-04-09 00:55:42.613635 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.613641 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.613647 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.613653 | orchestrator |
2026-04-09 00:55:42.613658 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 00:55:42.613664 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:00.684) 0:06:05.627 ********
2026-04-09 00:55:42.613670 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.613676 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.613682 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.613688 | orchestrator |
2026-04-09 00:55:42.613693 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 00:55:42.613699 | orchestrator | Thursday 09 April 2026 00:51:52 +0000 (0:00:00.399) 0:06:06.027 ********
2026-04-09 00:55:42.613705 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.613710 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.613723 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.613729 | orchestrator |
2026-04-09 00:55:42.613735 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 00:55:42.613741 | orchestrator | Thursday 09 April 2026 00:51:52 +0000 (0:00:00.266) 0:06:06.294 ********
2026-04-09 00:55:42.613746 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.613752 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.613758 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.613764 | orchestrator |
2026-04-09 00:55:42.613771 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 00:55:42.613777 | orchestrator | Thursday 09 April 2026 00:51:52 +0000 (0:00:00.267) 0:06:06.561 ********
2026-04-09 00:55:42.613782 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.613787 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.613793 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.613799 | orchestrator |
2026-04-09 00:55:42.613804 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 00:55:42.613810 | orchestrator | Thursday 09 April 2026 00:51:53 +0000 (0:00:00.663) 0:06:07.224 ********
2026-04-09 00:55:42.613815 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.613821 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.613827 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.613833 | orchestrator |
2026-04-09 00:55:42.613839 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 00:55:42.613844 | orchestrator | Thursday 09 April 2026 00:51:54 +0000 (0:00:00.876) 0:06:08.101 ********
2026-04-09 00:55:42.613850 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.613856 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.613862 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.613868 | orchestrator |
2026-04-09 00:55:42.613886 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 00:55:42.613893 | orchestrator | Thursday 09 April 2026 00:51:54 +0000 (0:00:00.271) 0:06:08.373 ********
2026-04-09 00:55:42.613899 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.613905 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.613916 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.613922 | orchestrator |
2026-04-09 00:55:42.613927 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 00:55:42.613934 | orchestrator | Thursday 09 April 2026 00:51:54 +0000 (0:00:00.251) 0:06:08.625 ********
2026-04-09 00:55:42.613940 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.613946 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.613952 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.613959 | orchestrator |
2026-04-09 00:55:42.613964 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 00:55:42.613970 | orchestrator | Thursday 09 April 2026 00:51:55 +0000 (0:00:00.319) 0:06:08.944 ********
2026-04-09 00:55:42.613976 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.613982 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.613989 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.613995 | orchestrator |
2026-04-09 00:55:42.614001 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 00:55:42.614007 | orchestrator | Thursday 09 April 2026 00:51:55 +0000 (0:00:00.433) 0:06:09.378 ********
2026-04-09 00:55:42.614044 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.614051 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.614057 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.614063 | orchestrator |
2026-04-09 00:55:42.614070 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:55:42.614075 | orchestrator | Thursday 09 April 2026 00:51:55 +0000 (0:00:00.379) 0:06:09.758 ********
2026-04-09 00:55:42.614081 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.614087 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.614093 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.614099 | orchestrator |
2026-04-09 00:55:42.614110 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 00:55:42.614117 | orchestrator | Thursday 09 April 2026 00:51:56 +0000 (0:00:00.315) 0:06:10.073 ********
2026-04-09 00:55:42.614123 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.614129 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.614142 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.614148 | orchestrator |
2026-04-09 00:55:42.614154 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 00:55:42.614161 | orchestrator | Thursday 09 April 2026 00:51:56 +0000 (0:00:00.298) 0:06:10.371 ********
2026-04-09 00:55:42.614167 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.614173 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.614179 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.614185 | orchestrator |
2026-04-09 00:55:42.614191 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 00:55:42.614197 | orchestrator | Thursday 09 April 2026 00:51:57 +0000 (0:00:00.581) 0:06:10.953 ********
2026-04-09 00:55:42.614203 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.614209 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.614216 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.614222 | orchestrator |
2026-04-09 00:55:42.614228 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 00:55:42.614234 | orchestrator | Thursday 09 April 2026 00:51:57 +0000 (0:00:00.305) 0:06:11.259 ********
2026-04-09 00:55:42.614240 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.614246 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.614253 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.614259 | orchestrator |
2026-04-09 00:55:42.614265 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-09 00:55:42.614271 | orchestrator | Thursday 09 April 2026 00:51:58 +0000 (0:00:00.534) 0:06:11.794 ********
2026-04-09 00:55:42.614278 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.614284 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.614290 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.614296 | orchestrator |
2026-04-09 00:55:42.614302 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-09 00:55:42.614305 | orchestrator | Thursday 09 April 2026 00:51:58 +0000 (0:00:00.607) 0:06:12.401 ********
2026-04-09 00:55:42.614309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:55:42.614313 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:55:42.614317 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:55:42.614321 | orchestrator |
2026-04-09 00:55:42.614324 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-09 00:55:42.614328 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:00.636) 0:06:13.037 ********
2026-04-09 00:55:42.614332 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.614336 | orchestrator |
2026-04-09 00:55:42.614339 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-09 00:55:42.614343 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:00.508) 0:06:13.546 ********
2026-04-09 00:55:42.614347 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.614350 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.614354 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.614358 | orchestrator |
2026-04-09 00:55:42.614362 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-09 00:55:42.614365 | orchestrator | Thursday 09 April 2026 00:52:00 +0000 (0:00:00.293) 0:06:13.839 ********
2026-04-09 00:55:42.614369 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.614373 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.614384 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.614388 | orchestrator |
2026-04-09 00:55:42.614395 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-09 00:55:42.614399 | orchestrator | Thursday 09 April 2026 00:52:00 +0000 (0:00:00.486) 0:06:14.326 ********
2026-04-09 00:55:42.614403 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.614406 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.614410 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.614414 | orchestrator |
2026-04-09 00:55:42.614418 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-09 00:55:42.614421 | orchestrator | Thursday 09 April 2026 00:52:01 +0000 (0:00:00.619) 0:06:14.945 ********
2026-04-09 00:55:42.614428 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.614432 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.614436 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.614439 | orchestrator |
2026-04-09 00:55:42.614443 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-09 00:55:42.614447 | orchestrator | Thursday 09 April 2026 00:52:01 +0000 (0:00:00.278) 0:06:15.224 ********
2026-04-09 00:55:42.614451 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 00:55:42.614455 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 00:55:42.614458 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 00:55:42.614462 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 00:55:42.614466 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 00:55:42.614470 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 00:55:42.614473 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 00:55:42.614477 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 00:55:42.614481 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 00:55:42.614485 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 00:55:42.614488 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 00:55:42.614497 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 00:55:42.614501 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 00:55:42.614505 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 00:55:42.614509 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 00:55:42.614512 | orchestrator |
2026-04-09 00:55:42.614516 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-09 00:55:42.614520 | orchestrator | Thursday 09 April 2026 00:52:04 +0000 (0:00:03.072) 0:06:18.296 ********
2026-04-09 00:55:42.614523 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.614527 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.614531 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.614535 | orchestrator |
2026-04-09 00:55:42.614539 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-09 00:55:42.614542 | orchestrator | Thursday 09 April 2026 00:52:04 +0000 (0:00:00.414) 0:06:18.711 ********
2026-04-09 00:55:42.614546 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.614550 | orchestrator |
2026-04-09 00:55:42.614554 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-09 00:55:42.614557 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:00.461) 0:06:19.172 ********
2026-04-09 00:55:42.614561 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 00:55:42.614568 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 00:55:42.614572 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 00:55:42.614576 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-09 00:55:42.614580 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-09 00:55:42.614584 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-09 00:55:42.614589 | orchestrator |
2026-04-09 00:55:42.614595 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-09 00:55:42.614601 | orchestrator | Thursday 09 April 2026 00:52:06 +0000 (0:00:01.049) 0:06:20.221 ********
2026-04-09 00:55:42.614607 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:55:42.614613 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 00:55:42.614619 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 00:55:42.614626 | orchestrator |
2026-04-09 00:55:42.614632 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-09 00:55:42.614639 | orchestrator | Thursday 09 April 2026 00:52:08 +0000 (0:00:01.882) 0:06:22.103 ********
2026-04-09 00:55:42.614645 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-09 00:55:42.614652 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 00:55:42.614656 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.614660 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-09 00:55:42.614664 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-09 00:55:42.614670 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.614676 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-09 00:55:42.614682 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-09 00:55:42.614688 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.614694 | orchestrator |
2026-04-09 00:55:42.614700 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-09 00:55:42.614707 | orchestrator | Thursday 09 April 2026 00:52:09 +0000 (0:00:01.339) 0:06:23.443 ********
2026-04-09 00:55:42.614713 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:55:42.614719 | orchestrator |
2026-04-09 00:55:42.614725 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-09 00:55:42.614731 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:01.738) 0:06:25.182 ********
2026-04-09 00:55:42.614735 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.614739 | orchestrator |
2026-04-09 00:55:42.614743 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-09 00:55:42.614747 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:00.477) 0:06:25.659 ********
2026-04-09 00:55:42.614751 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d534d538-4d4e-5604-9605-85867297f7ab', 'data_vg': 'ceph-d534d538-4d4e-5604-9605-85867297f7ab'})
2026-04-09 00:55:42.614755 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a254e30f-06f2-55f8-8a7e-64e382968b4c', 'data_vg': 'ceph-a254e30f-06f2-55f8-8a7e-64e382968b4c'})
2026-04-09 00:55:42.614759 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d2293633-4853-52c3-92d9-c83407e5923f', 'data_vg': 'ceph-d2293633-4853-52c3-92d9-c83407e5923f'})
2026-04-09 00:55:42.614763 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a6a3488f-30e9-5ba3-9724-16c1df88c443', 'data_vg': 'ceph-a6a3488f-30e9-5ba3-9724-16c1df88c443'})
2026-04-09 00:55:42.614766 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6327354e-b41f-514e-b570-068bfc1f3295', 'data_vg': 'ceph-6327354e-b41f-514e-b570-068bfc1f3295'})
2026-04-09 00:55:42.614770 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9ee08831-7be2-5055-b7bf-21e225eea3cc', 'data_vg': 'ceph-9ee08831-7be2-5055-b7bf-21e225eea3cc'})
2026-04-09 00:55:42.614777 | orchestrator |
2026-04-09 00:55:42.614784 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-09 00:55:42.614788 | orchestrator | Thursday 09 April 2026 00:52:48 +0000 (0:00:36.955) 0:07:02.615 ********
2026-04-09 00:55:42.614792 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.614796 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.614800 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.614803 | orchestrator |
2026-04-09 00:55:42.614807 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-09 00:55:42.614811 | orchestrator | Thursday 09 April 2026 00:52:49 +0000 (0:00:00.268) 0:07:02.883 ********
2026-04-09 00:55:42.614815 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.614818 | orchestrator |
2026-04-09 00:55:42.614822 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-09 00:55:42.614826 | orchestrator | Thursday 09 April 2026 00:52:49 +0000 (0:00:00.415) 0:07:03.299 ********
2026-04-09 00:55:42.614830 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.614834 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.614837 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.614841 | orchestrator |
2026-04-09 00:55:42.614845 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-09 00:55:42.614849 | orchestrator | Thursday 09 April 2026 00:52:50 +0000 (0:00:00.694) 0:07:03.993 ********
2026-04-09 00:55:42.614853 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:55:42.614856 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:55:42.614860 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:55:42.614864 | orchestrator |
2026-04-09 00:55:42.614868 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-09 00:55:42.614886 | orchestrator | Thursday 09 April 2026 00:52:51 +0000 (0:00:01.459) 0:07:05.453 ********
2026-04-09 00:55:42.614892 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.614900 | orchestrator |
2026-04-09 00:55:42.614904 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-09 00:55:42.614908 | orchestrator | Thursday 09 April 2026 00:52:52 +0000 (0:00:00.367) 0:07:05.820 ********
2026-04-09 00:55:42.614911 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.614915 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.614919 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.614923 | orchestrator |
2026-04-09 00:55:42.614927 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-09 00:55:42.614930 | orchestrator | Thursday 09 April 2026 00:52:53 +0000 (0:00:01.336) 0:07:07.157 ********
2026-04-09 00:55:42.614934 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.614938 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.614942 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.614946 | orchestrator |
2026-04-09 00:55:42.614949 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-09 00:55:42.614953 | orchestrator | Thursday 09 April 2026 00:52:54 +0000 (0:00:01.367) 0:07:08.525 ********
2026-04-09 00:55:42.614957 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:55:42.614961 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:55:42.614965 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:55:42.614971 | orchestrator |
2026-04-09 00:55:42.614977 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-09 00:55:42.614982 | orchestrator | Thursday 09 April 2026 00:52:56 +0000 (0:00:01.756) 0:07:10.281 ********
2026-04-09 00:55:42.614987 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.614995 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.615004 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.615009 | orchestrator |
2026-04-09 00:55:42.615015 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-09 00:55:42.615032 | orchestrator | Thursday 09 April 2026 00:52:56 +0000 (0:00:00.261) 0:07:10.542 ********
2026-04-09 00:55:42.615040 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615046 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.615052 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.615058 | orchestrator |
2026-04-09 00:55:42.615068 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-09 00:55:42.615074 | orchestrator | Thursday 09 April 2026 00:52:57 +0000 (0:00:00.276) 0:07:10.819 ********
2026-04-09 00:55:42.615080 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 00:55:42.615084 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-04-09 00:55:42.615088 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-04-09 00:55:42.615092 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-04-09 00:55:42.615096 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-04-09 00:55:42.615099 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-09 00:55:42.615103 | orchestrator |
2026-04-09 00:55:42.615107 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-09 00:55:42.615111 | orchestrator | Thursday 09 April 2026 00:52:58 +0000 (0:00:01.153) 0:07:11.972 ********
2026-04-09 00:55:42.615115 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-09 00:55:42.615119 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-09 00:55:42.615122 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-04-09 00:55:42.615126 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-04-09 00:55:42.615130 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-09 00:55:42.615134 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-09 00:55:42.615138 | orchestrator |
2026-04-09 00:55:42.615141 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-09 00:55:42.615145 | orchestrator | Thursday 09 April 2026 00:53:00 +0000 (0:00:02.220) 0:07:14.192 ********
2026-04-09 00:55:42.615149 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-09 00:55:42.615153 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-09 00:55:42.615157 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-04-09 00:55:42.615160 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-04-09 00:55:42.615164 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-09 00:55:42.615168 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-09 00:55:42.615172 | orchestrator |
2026-04-09 00:55:42.615180 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-09 00:55:42.615184 | orchestrator | Thursday 09 April 2026 00:53:04 +0000 (0:00:03.977) 0:07:18.169 ********
2026-04-09 00:55:42.615188 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615192 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.615195 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:55:42.615199 | orchestrator |
2026-04-09 00:55:42.615203 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-09 00:55:42.615207 | orchestrator | Thursday 09 April 2026 00:53:06 +0000 (0:00:02.093) 0:07:20.263 ********
2026-04-09 00:55:42.615211 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615215 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.615219 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-09 00:55:42.615223 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:55:42.615227 | orchestrator |
2026-04-09 00:55:42.615231 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-09 00:55:42.615234 | orchestrator | Thursday 09 April 2026 00:53:19 +0000 (0:00:12.702) 0:07:32.966 ********
2026-04-09 00:55:42.615238 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615242 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.615246 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.615250 | orchestrator |
2026-04-09 00:55:42.615253 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 00:55:42.615260 | orchestrator | Thursday 09 April 2026 00:53:20 +0000 (0:00:00.838) 0:07:33.804 ********
2026-04-09 00:55:42.615264 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615268 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.615272 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.615275 | orchestrator |
2026-04-09 00:55:42.615279 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-09 00:55:42.615283 | orchestrator | Thursday 09 April 2026 00:53:20 +0000 (0:00:00.680) 0:07:34.485 ********
2026-04-09 00:55:42.615287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:55:42.615291 | orchestrator |
2026-04-09 00:55:42.615295 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-09 00:55:42.615298 | orchestrator | Thursday 09 April 2026 00:53:21 +0000 (0:00:00.510) 0:07:34.995 ********
2026-04-09 00:55:42.615302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:55:42.615306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:55:42.615310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:55:42.615314 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615319 | orchestrator |
2026-04-09 00:55:42.615326 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-09 00:55:42.615335 | orchestrator | Thursday 09 April 2026 00:53:21 +0000 (0:00:00.375) 0:07:35.370 ********
2026-04-09 00:55:42.615341 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615347 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.615352 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.615358 | orchestrator |
2026-04-09 00:55:42.615364 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-09 00:55:42.615370 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:00.540) 0:07:35.910 ********
2026-04-09 00:55:42.615377 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615383 | orchestrator |
2026-04-09 00:55:42.615389 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-09 00:55:42.615396 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:00.204) 0:07:36.115 ********
2026-04-09 00:55:42.615400 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615404 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:55:42.615408 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:55:42.615415 | orchestrator |
2026-04-09 00:55:42.615421 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-09 00:55:42.615430 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:00.297) 0:07:36.413 ********
2026-04-09 00:55:42.615436 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615442 | orchestrator |
2026-04-09 00:55:42.615448 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-09 00:55:42.615454 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:00.268) 0:07:36.681 ********
2026-04-09 00:55:42.615459 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615465 | orchestrator |
2026-04-09 00:55:42.615471 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-09 00:55:42.615477 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:00.240) 0:07:36.921 ********
2026-04-09 00:55:42.615483 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615490 | orchestrator |
2026-04-09 00:55:42.615497 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-09 00:55:42.615504 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:00.116) 0:07:37.038 ********
2026-04-09 00:55:42.615510 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615517 | orchestrator |
2026-04-09 00:55:42.615523 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-09 00:55:42.615530 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:00.197) 0:07:37.236 ********
2026-04-09 00:55:42.615536 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:55:42.615547 |
orchestrator | 2026-04-09 00:55:42.615554 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-09 00:55:42.615560 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:00.211) 0:07:37.447 ******** 2026-04-09 00:55:42.615566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:55:42.615572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:55:42.615579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:55:42.615585 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.615601 | orchestrator | 2026-04-09 00:55:42.615615 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-09 00:55:42.615621 | orchestrator | Thursday 09 April 2026 00:53:24 +0000 (0:00:00.701) 0:07:38.149 ******** 2026-04-09 00:55:42.615627 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.615633 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.615639 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.615645 | orchestrator | 2026-04-09 00:55:42.615652 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-09 00:55:42.615658 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:00.682) 0:07:38.831 ******** 2026-04-09 00:55:42.615665 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.615672 | orchestrator | 2026-04-09 00:55:42.615678 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-09 00:55:42.615685 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:00.275) 0:07:39.106 ******** 2026-04-09 00:55:42.615692 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.615700 | orchestrator | 2026-04-09 00:55:42.615704 | orchestrator | PLAY [Apply role ceph-crash] 
*************************************************** 2026-04-09 00:55:42.615708 | orchestrator | 2026-04-09 00:55:42.615712 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:55:42.615716 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:00.656) 0:07:39.762 ******** 2026-04-09 00:55:42.615720 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.615725 | orchestrator | 2026-04-09 00:55:42.615728 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:55:42.615732 | orchestrator | Thursday 09 April 2026 00:53:27 +0000 (0:00:01.235) 0:07:40.998 ******** 2026-04-09 00:55:42.615736 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.615740 | orchestrator | 2026-04-09 00:55:42.615744 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:55:42.615748 | orchestrator | Thursday 09 April 2026 00:53:28 +0000 (0:00:01.133) 0:07:42.132 ******** 2026-04-09 00:55:42.615751 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.615755 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.615759 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.615763 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.615767 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.615771 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.615775 | orchestrator | 2026-04-09 00:55:42.615779 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:55:42.615782 | orchestrator | Thursday 09 April 2026 00:53:29 +0000 
(0:00:00.803) 0:07:42.935 ******** 2026-04-09 00:55:42.615786 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.615790 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.615794 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.615798 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.615801 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.615805 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.615809 | orchestrator | 2026-04-09 00:55:42.615813 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:55:42.615829 | orchestrator | Thursday 09 April 2026 00:53:30 +0000 (0:00:01.041) 0:07:43.977 ******** 2026-04-09 00:55:42.615832 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.615836 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.615840 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.615844 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.615847 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.615851 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.615855 | orchestrator | 2026-04-09 00:55:42.615859 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:55:42.615862 | orchestrator | Thursday 09 April 2026 00:53:31 +0000 (0:00:01.242) 0:07:45.219 ******** 2026-04-09 00:55:42.615866 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.615879 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.615886 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.615890 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.615894 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.615897 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.615901 | orchestrator | 2026-04-09 00:55:42.615905 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 
2026-04-09 00:55:42.615909 | orchestrator | Thursday 09 April 2026 00:53:32 +0000 (0:00:01.008) 0:07:46.228 ******** 2026-04-09 00:55:42.615912 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.615916 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.615920 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.615924 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.615928 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.615931 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.615935 | orchestrator | 2026-04-09 00:55:42.615939 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:55:42.615943 | orchestrator | Thursday 09 April 2026 00:53:33 +0000 (0:00:00.911) 0:07:47.139 ******** 2026-04-09 00:55:42.615946 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.615950 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.615954 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.615958 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.615961 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.615965 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.615969 | orchestrator | 2026-04-09 00:55:42.615973 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:55:42.615977 | orchestrator | Thursday 09 April 2026 00:53:33 +0000 (0:00:00.609) 0:07:47.748 ******** 2026-04-09 00:55:42.615980 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.615984 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.615988 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.615992 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.615996 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.615999 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.616003 | orchestrator | 
2026-04-09 00:55:42.616010 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:55:42.616014 | orchestrator | Thursday 09 April 2026 00:53:34 +0000 (0:00:00.788) 0:07:48.537 ******** 2026-04-09 00:55:42.616018 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616022 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.616026 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.616029 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.616033 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.616037 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.616041 | orchestrator | 2026-04-09 00:55:42.616044 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:55:42.616048 | orchestrator | Thursday 09 April 2026 00:53:35 +0000 (0:00:01.171) 0:07:49.708 ******** 2026-04-09 00:55:42.616052 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616058 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.616062 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.616066 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.616070 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.616073 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.616077 | orchestrator | 2026-04-09 00:55:42.616081 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:55:42.616085 | orchestrator | Thursday 09 April 2026 00:53:37 +0000 (0:00:01.241) 0:07:50.949 ******** 2026-04-09 00:55:42.616089 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.616092 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.616096 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.616100 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.616104 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.616107 | orchestrator | 
skipping: [testbed-node-5] 2026-04-09 00:55:42.616111 | orchestrator | 2026-04-09 00:55:42.616115 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:55:42.616119 | orchestrator | Thursday 09 April 2026 00:53:37 +0000 (0:00:00.573) 0:07:51.523 ******** 2026-04-09 00:55:42.616123 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616126 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.616130 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.616134 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.616138 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.616142 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.616145 | orchestrator | 2026-04-09 00:55:42.616149 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:55:42.616153 | orchestrator | Thursday 09 April 2026 00:53:38 +0000 (0:00:00.764) 0:07:52.287 ******** 2026-04-09 00:55:42.616157 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.616160 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.616164 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.616168 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.616172 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.616175 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.616179 | orchestrator | 2026-04-09 00:55:42.616183 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:55:42.616187 | orchestrator | Thursday 09 April 2026 00:53:39 +0000 (0:00:00.586) 0:07:52.874 ******** 2026-04-09 00:55:42.616191 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.616194 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.616198 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.616202 | orchestrator | ok: [testbed-node-3] 2026-04-09 
00:55:42.616206 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.616210 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.616213 | orchestrator | 2026-04-09 00:55:42.616217 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:55:42.616221 | orchestrator | Thursday 09 April 2026 00:53:39 +0000 (0:00:00.550) 0:07:53.425 ******** 2026-04-09 00:55:42.616225 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.616228 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.616232 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.616236 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.616240 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.616243 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.616247 | orchestrator | 2026-04-09 00:55:42.616251 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:55:42.616255 | orchestrator | Thursday 09 April 2026 00:53:40 +0000 (0:00:00.937) 0:07:54.363 ******** 2026-04-09 00:55:42.616259 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.616263 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.616266 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:55:42.616270 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.616274 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.616278 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.616284 | orchestrator | 2026-04-09 00:55:42.616288 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:55:42.616292 | orchestrator | Thursday 09 April 2026 00:53:41 +0000 (0:00:00.510) 0:07:54.873 ******** 2026-04-09 00:55:42.616295 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:55:42.616299 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:55:42.616303 | orchestrator | 
skipping: [testbed-node-2] 2026-04-09 00:55:42.616307 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.616310 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.616314 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.616318 | orchestrator | 2026-04-09 00:55:42.616322 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:55:42.616325 | orchestrator | Thursday 09 April 2026 00:53:41 +0000 (0:00:00.760) 0:07:55.634 ******** 2026-04-09 00:55:42.616329 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616333 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.616337 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.616341 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.616348 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.616354 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.616363 | orchestrator | 2026-04-09 00:55:42.616369 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:55:42.616375 | orchestrator | Thursday 09 April 2026 00:53:42 +0000 (0:00:00.535) 0:07:56.170 ******** 2026-04-09 00:55:42.616381 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616436 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.616450 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.616453 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.616457 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.616461 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.616465 | orchestrator | 2026-04-09 00:55:42.616473 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:55:42.616480 | orchestrator | Thursday 09 April 2026 00:53:43 +0000 (0:00:00.839) 0:07:57.009 ******** 2026-04-09 00:55:42.616486 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616492 | 
orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.616499 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.616505 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.616511 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.616517 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.616523 | orchestrator | 2026-04-09 00:55:42.616528 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-09 00:55:42.616538 | orchestrator | Thursday 09 April 2026 00:53:44 +0000 (0:00:01.140) 0:07:58.149 ******** 2026-04-09 00:55:42.616544 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.616550 | orchestrator | 2026-04-09 00:55:42.616556 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-09 00:55:42.616561 | orchestrator | Thursday 09 April 2026 00:53:47 +0000 (0:00:03.154) 0:08:01.304 ******** 2026-04-09 00:55:42.616568 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616574 | orchestrator | 2026-04-09 00:55:42.616580 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-09 00:55:42.616585 | orchestrator | Thursday 09 April 2026 00:53:49 +0000 (0:00:01.676) 0:08:02.981 ******** 2026-04-09 00:55:42.616591 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616598 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.616603 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.616609 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.616614 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.616621 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.616627 | orchestrator | 2026-04-09 00:55:42.616633 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-09 00:55:42.616639 | orchestrator | Thursday 09 April 2026 00:53:50 +0000 (0:00:01.663) 0:08:04.644 ******** 
2026-04-09 00:55:42.616645 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.616657 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.616663 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.616669 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.616675 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.616681 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.616686 | orchestrator | 2026-04-09 00:55:42.616692 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-09 00:55:42.616698 | orchestrator | Thursday 09 April 2026 00:53:51 +0000 (0:00:01.101) 0:08:05.746 ******** 2026-04-09 00:55:42.616705 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.616713 | orchestrator | 2026-04-09 00:55:42.616720 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-09 00:55:42.616727 | orchestrator | Thursday 09 April 2026 00:53:53 +0000 (0:00:01.199) 0:08:06.946 ******** 2026-04-09 00:55:42.616733 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.616739 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.616746 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.616752 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.616759 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.616765 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.616772 | orchestrator | 2026-04-09 00:55:42.616777 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-09 00:55:42.616781 | orchestrator | Thursday 09 April 2026 00:53:55 +0000 (0:00:01.912) 0:08:08.859 ******** 2026-04-09 00:55:42.616785 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.616789 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 00:55:42.616792 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.616796 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.616800 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.616806 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.616812 | orchestrator | 2026-04-09 00:55:42.616818 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-09 00:55:42.616824 | orchestrator | Thursday 09 April 2026 00:53:58 +0000 (0:00:03.141) 0:08:12.000 ******** 2026-04-09 00:55:42.616833 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.616840 | orchestrator | 2026-04-09 00:55:42.616846 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-09 00:55:42.616852 | orchestrator | Thursday 09 April 2026 00:53:59 +0000 (0:00:01.022) 0:08:13.023 ******** 2026-04-09 00:55:42.616858 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.616864 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.616898 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.616905 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.616911 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.616917 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.616923 | orchestrator | 2026-04-09 00:55:42.616930 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-09 00:55:42.616936 | orchestrator | Thursday 09 April 2026 00:53:59 +0000 (0:00:00.538) 0:08:13.561 ******** 2026-04-09 00:55:42.616942 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:55:42.616948 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:55:42.616954 | orchestrator | changed: [testbed-node-2] 2026-04-09 
00:55:42.616961 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.616967 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.616973 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.616980 | orchestrator | 2026-04-09 00:55:42.616986 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-09 00:55:42.616992 | orchestrator | Thursday 09 April 2026 00:54:01 +0000 (0:00:02.189) 0:08:15.751 ******** 2026-04-09 00:55:42.616998 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:55:42.617010 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:55:42.617016 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:55:42.617022 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617028 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617045 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617052 | orchestrator | 2026-04-09 00:55:42.617063 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-09 00:55:42.617070 | orchestrator | 2026-04-09 00:55:42.617076 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:55:42.617082 | orchestrator | Thursday 09 April 2026 00:54:02 +0000 (0:00:01.028) 0:08:16.780 ******** 2026-04-09 00:55:42.617089 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.617095 | orchestrator | 2026-04-09 00:55:42.617101 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:55:42.617107 | orchestrator | Thursday 09 April 2026 00:54:03 +0000 (0:00:00.495) 0:08:17.275 ******** 2026-04-09 00:55:42.617113 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.617120 | orchestrator | 
2026-04-09 00:55:42.617126 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:55:42.617132 | orchestrator | Thursday 09 April 2026 00:54:04 +0000 (0:00:00.714) 0:08:17.989 ******** 2026-04-09 00:55:42.617138 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617143 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617150 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617156 | orchestrator | 2026-04-09 00:55:42.617162 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:55:42.617168 | orchestrator | Thursday 09 April 2026 00:54:04 +0000 (0:00:00.298) 0:08:18.288 ******** 2026-04-09 00:55:42.617174 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617180 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617186 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617193 | orchestrator | 2026-04-09 00:55:42.617199 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:55:42.617205 | orchestrator | Thursday 09 April 2026 00:54:05 +0000 (0:00:00.693) 0:08:18.982 ******** 2026-04-09 00:55:42.617212 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617218 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617224 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617230 | orchestrator | 2026-04-09 00:55:42.617237 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:55:42.617243 | orchestrator | Thursday 09 April 2026 00:54:05 +0000 (0:00:00.688) 0:08:19.670 ******** 2026-04-09 00:55:42.617249 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617255 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617261 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617266 | orchestrator | 2026-04-09 00:55:42.617272 | orchestrator | TASK 
[ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:55:42.617279 | orchestrator | Thursday 09 April 2026 00:54:06 +0000 (0:00:01.030) 0:08:20.701 ******** 2026-04-09 00:55:42.617284 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617290 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617297 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617303 | orchestrator | 2026-04-09 00:55:42.617309 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:55:42.617316 | orchestrator | Thursday 09 April 2026 00:54:07 +0000 (0:00:00.299) 0:08:21.001 ******** 2026-04-09 00:55:42.617322 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617329 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617335 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617341 | orchestrator | 2026-04-09 00:55:42.617348 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:55:42.617359 | orchestrator | Thursday 09 April 2026 00:54:07 +0000 (0:00:00.294) 0:08:21.295 ******** 2026-04-09 00:55:42.617365 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617372 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617378 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617384 | orchestrator | 2026-04-09 00:55:42.617391 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:55:42.617397 | orchestrator | Thursday 09 April 2026 00:54:07 +0000 (0:00:00.299) 0:08:21.595 ******** 2026-04-09 00:55:42.617403 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617409 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617416 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617422 | orchestrator | 2026-04-09 00:55:42.617432 | orchestrator | TASK [ceph-handler : Check 
for a ceph-exporter container] ********************** 2026-04-09 00:55:42.617439 | orchestrator | Thursday 09 April 2026 00:54:08 +0000 (0:00:01.151) 0:08:22.747 ******** 2026-04-09 00:55:42.617445 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617451 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617457 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617463 | orchestrator | 2026-04-09 00:55:42.617469 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:55:42.617476 | orchestrator | Thursday 09 April 2026 00:54:09 +0000 (0:00:00.681) 0:08:23.429 ******** 2026-04-09 00:55:42.617482 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617488 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617495 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617501 | orchestrator | 2026-04-09 00:55:42.617507 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:55:42.617514 | orchestrator | Thursday 09 April 2026 00:54:09 +0000 (0:00:00.314) 0:08:23.743 ******** 2026-04-09 00:55:42.617520 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617526 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617532 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617539 | orchestrator | 2026-04-09 00:55:42.617545 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:55:42.617551 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:00.257) 0:08:24.001 ******** 2026-04-09 00:55:42.617558 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617564 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617570 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617577 | orchestrator | 2026-04-09 00:55:42.617583 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] 
****************************** 2026-04-09 00:55:42.617589 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:00.519) 0:08:24.521 ******** 2026-04-09 00:55:42.617597 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617604 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617610 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617617 | orchestrator | 2026-04-09 00:55:42.617627 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:55:42.617634 | orchestrator | Thursday 09 April 2026 00:54:11 +0000 (0:00:00.302) 0:08:24.823 ******** 2026-04-09 00:55:42.617640 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617647 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617653 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617659 | orchestrator | 2026-04-09 00:55:42.617665 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:55:42.617671 | orchestrator | Thursday 09 April 2026 00:54:11 +0000 (0:00:00.272) 0:08:25.096 ******** 2026-04-09 00:55:42.617677 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617684 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617690 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617696 | orchestrator | 2026-04-09 00:55:42.617702 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:55:42.617708 | orchestrator | Thursday 09 April 2026 00:54:11 +0000 (0:00:00.253) 0:08:25.350 ******** 2026-04-09 00:55:42.617714 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617723 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617730 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617736 | orchestrator | 2026-04-09 00:55:42.617742 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 
2026-04-09 00:55:42.617748 | orchestrator | Thursday 09 April 2026 00:54:11 +0000 (0:00:00.418) 0:08:25.768 ******** 2026-04-09 00:55:42.617755 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617761 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617768 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617774 | orchestrator | 2026-04-09 00:55:42.617780 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:55:42.617786 | orchestrator | Thursday 09 April 2026 00:54:12 +0000 (0:00:00.271) 0:08:26.040 ******** 2026-04-09 00:55:42.617792 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617798 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617804 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617811 | orchestrator | 2026-04-09 00:55:42.617817 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:55:42.617823 | orchestrator | Thursday 09 April 2026 00:54:12 +0000 (0:00:00.295) 0:08:26.335 ******** 2026-04-09 00:55:42.617829 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.617836 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.617842 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.617848 | orchestrator | 2026-04-09 00:55:42.617854 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-09 00:55:42.617860 | orchestrator | Thursday 09 April 2026 00:54:13 +0000 (0:00:00.716) 0:08:27.051 ******** 2026-04-09 00:55:42.617867 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.617887 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.617894 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-04-09 00:55:42.617908 | orchestrator | 2026-04-09 00:55:42.617915 | orchestrator | TASK [ceph-facts : Get current default crush rule details] 
********************* 2026-04-09 00:55:42.617921 | orchestrator | Thursday 09 April 2026 00:54:13 +0000 (0:00:00.372) 0:08:27.424 ******** 2026-04-09 00:55:42.617927 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:55:42.617933 | orchestrator | 2026-04-09 00:55:42.617939 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-09 00:55:42.617946 | orchestrator | Thursday 09 April 2026 00:54:15 +0000 (0:00:01.665) 0:08:29.090 ******** 2026-04-09 00:55:42.617953 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-09 00:55:42.617960 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.617967 | orchestrator | 2026-04-09 00:55:42.617973 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-09 00:55:42.617978 | orchestrator | Thursday 09 April 2026 00:54:15 +0000 (0:00:00.188) 0:08:29.278 ******** 2026-04-09 00:55:42.617989 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:55:42.617999 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:55:42.618005 | orchestrator | 2026-04-09 00:55:42.618050 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-09 00:55:42.618059 | orchestrator | 
Thursday 09 April 2026 00:54:21 +0000 (0:00:06.340) 0:08:35.618 ******** 2026-04-09 00:55:42.618066 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:55:42.618078 | orchestrator | 2026-04-09 00:55:42.618084 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-09 00:55:42.618091 | orchestrator | Thursday 09 April 2026 00:54:24 +0000 (0:00:02.685) 0:08:38.304 ******** 2026-04-09 00:55:42.618098 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.618105 | orchestrator | 2026-04-09 00:55:42.618112 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-09 00:55:42.618119 | orchestrator | Thursday 09 April 2026 00:54:25 +0000 (0:00:00.792) 0:08:39.096 ******** 2026-04-09 00:55:42.618126 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 00:55:42.618138 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 00:55:42.618145 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 00:55:42.618152 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-09 00:55:42.618159 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-09 00:55:42.618166 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-09 00:55:42.618172 | orchestrator | 2026-04-09 00:55:42.618179 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-09 00:55:42.618185 | orchestrator | Thursday 09 April 2026 00:54:26 +0000 (0:00:01.048) 0:08:40.145 ******** 2026-04-09 00:55:42.618191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.618198 | 
orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:55:42.618205 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:55:42.618211 | orchestrator | 2026-04-09 00:55:42.618217 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:55:42.618224 | orchestrator | Thursday 09 April 2026 00:54:27 +0000 (0:00:01.524) 0:08:41.670 ******** 2026-04-09 00:55:42.618231 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:55:42.618239 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:55:42.618246 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.618253 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:55:42.618259 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 00:55:42.618265 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.618271 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:55:42.618277 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 00:55:42.618283 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.618289 | orchestrator | 2026-04-09 00:55:42.618295 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-09 00:55:42.618301 | orchestrator | Thursday 09 April 2026 00:54:29 +0000 (0:00:01.139) 0:08:42.809 ******** 2026-04-09 00:55:42.618307 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.618313 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.618319 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.618325 | orchestrator | 2026-04-09 00:55:42.618331 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-09 00:55:42.618337 | orchestrator | Thursday 09 April 2026 00:54:31 +0000 (0:00:02.184) 0:08:44.994 ******** 2026-04-09 00:55:42.618344 | orchestrator | 
skipping: [testbed-node-3] 2026-04-09 00:55:42.618351 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.618358 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.618364 | orchestrator | 2026-04-09 00:55:42.618371 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-09 00:55:42.618377 | orchestrator | Thursday 09 April 2026 00:54:31 +0000 (0:00:00.279) 0:08:45.274 ******** 2026-04-09 00:55:42.618384 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.618398 | orchestrator | 2026-04-09 00:55:42.618404 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-09 00:55:42.618411 | orchestrator | Thursday 09 April 2026 00:54:31 +0000 (0:00:00.503) 0:08:45.778 ******** 2026-04-09 00:55:42.618417 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.618424 | orchestrator | 2026-04-09 00:55:42.618430 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-09 00:55:42.618436 | orchestrator | Thursday 09 April 2026 00:54:32 +0000 (0:00:00.723) 0:08:46.502 ******** 2026-04-09 00:55:42.618443 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.618449 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.618455 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.618461 | orchestrator | 2026-04-09 00:55:42.618467 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-09 00:55:42.618477 | orchestrator | Thursday 09 April 2026 00:54:33 +0000 (0:00:01.232) 0:08:47.735 ******** 2026-04-09 00:55:42.618484 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.618490 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.618497 | 
orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.618503 | orchestrator | 2026-04-09 00:55:42.618509 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-09 00:55:42.618516 | orchestrator | Thursday 09 April 2026 00:54:35 +0000 (0:00:01.123) 0:08:48.858 ******** 2026-04-09 00:55:42.618522 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.618528 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.618535 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.618541 | orchestrator | 2026-04-09 00:55:42.618547 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-09 00:55:42.618554 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:02.153) 0:08:51.011 ******** 2026-04-09 00:55:42.618560 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.618566 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.618573 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.618581 | orchestrator | 2026-04-09 00:55:42.618587 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-09 00:55:42.618593 | orchestrator | Thursday 09 April 2026 00:54:39 +0000 (0:00:02.183) 0:08:53.195 ******** 2026-04-09 00:55:42.618600 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.618607 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.618613 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.618621 | orchestrator | 2026-04-09 00:55:42.618627 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:55:42.618633 | orchestrator | Thursday 09 April 2026 00:54:40 +0000 (0:00:01.477) 0:08:54.673 ******** 2026-04-09 00:55:42.618639 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.618645 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.618652 | orchestrator | changed: 
[testbed-node-5] 2026-04-09 00:55:42.618659 | orchestrator | 2026-04-09 00:55:42.618670 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-09 00:55:42.618676 | orchestrator | Thursday 09 April 2026 00:54:41 +0000 (0:00:00.657) 0:08:55.331 ******** 2026-04-09 00:55:42.618683 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.618690 | orchestrator | 2026-04-09 00:55:42.618696 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-09 00:55:42.618703 | orchestrator | Thursday 09 April 2026 00:54:42 +0000 (0:00:00.492) 0:08:55.823 ******** 2026-04-09 00:55:42.618710 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.618717 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.618723 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.618730 | orchestrator | 2026-04-09 00:55:42.618736 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-09 00:55:42.618743 | orchestrator | Thursday 09 April 2026 00:54:42 +0000 (0:00:00.540) 0:08:56.363 ******** 2026-04-09 00:55:42.618754 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.618761 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.618767 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.618774 | orchestrator | 2026-04-09 00:55:42.618781 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-09 00:55:42.618787 | orchestrator | Thursday 09 April 2026 00:54:43 +0000 (0:00:01.189) 0:08:57.553 ******** 2026-04-09 00:55:42.618793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:55:42.618799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:55:42.618806 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2026-04-09 00:55:42.618813 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.618819 | orchestrator | 2026-04-09 00:55:42.618826 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-09 00:55:42.618832 | orchestrator | Thursday 09 April 2026 00:54:44 +0000 (0:00:00.583) 0:08:58.136 ******** 2026-04-09 00:55:42.618839 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.618845 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.618851 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.618857 | orchestrator | 2026-04-09 00:55:42.618863 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-09 00:55:42.618880 | orchestrator | 2026-04-09 00:55:42.618887 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:55:42.618893 | orchestrator | Thursday 09 April 2026 00:54:45 +0000 (0:00:00.794) 0:08:58.931 ******** 2026-04-09 00:55:42.618899 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.618905 | orchestrator | 2026-04-09 00:55:42.618912 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:55:42.618918 | orchestrator | Thursday 09 April 2026 00:54:45 +0000 (0:00:00.496) 0:08:59.427 ******** 2026-04-09 00:55:42.618924 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.618931 | orchestrator | 2026-04-09 00:55:42.618936 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:55:42.618941 | orchestrator | Thursday 09 April 2026 00:54:46 +0000 (0:00:00.496) 0:08:59.924 ******** 2026-04-09 00:55:42.618947 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:55:42.618952 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.618958 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.618964 | orchestrator | 2026-04-09 00:55:42.618970 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:55:42.618977 | orchestrator | Thursday 09 April 2026 00:54:46 +0000 (0:00:00.558) 0:09:00.482 ******** 2026-04-09 00:55:42.618983 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.618989 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.618995 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619002 | orchestrator | 2026-04-09 00:55:42.619008 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:55:42.619018 | orchestrator | Thursday 09 April 2026 00:54:47 +0000 (0:00:00.654) 0:09:01.137 ******** 2026-04-09 00:55:42.619024 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619030 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.619036 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619043 | orchestrator | 2026-04-09 00:55:42.619049 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:55:42.619055 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:00.711) 0:09:01.848 ******** 2026-04-09 00:55:42.619061 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619067 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.619074 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619080 | orchestrator | 2026-04-09 00:55:42.619086 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:55:42.619097 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:00.606) 0:09:02.455 ******** 2026-04-09 00:55:42.619103 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619109 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619118 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619124 | orchestrator | 2026-04-09 00:55:42.619130 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:55:42.619136 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.598) 0:09:03.053 ******** 2026-04-09 00:55:42.619142 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619149 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619156 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619162 | orchestrator | 2026-04-09 00:55:42.619168 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:55:42.619183 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.293) 0:09:03.347 ******** 2026-04-09 00:55:42.619187 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619191 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619195 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619199 | orchestrator | 2026-04-09 00:55:42.619203 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:55:42.619210 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.272) 0:09:03.619 ******** 2026-04-09 00:55:42.619214 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619218 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.619222 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619226 | orchestrator | 2026-04-09 00:55:42.619230 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:55:42.619233 | orchestrator | Thursday 09 April 2026 00:54:50 +0000 (0:00:00.705) 0:09:04.324 ******** 2026-04-09 00:55:42.619237 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619241 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 00:55:42.619245 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619248 | orchestrator | 2026-04-09 00:55:42.619252 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:55:42.619256 | orchestrator | Thursday 09 April 2026 00:54:51 +0000 (0:00:00.998) 0:09:05.323 ******** 2026-04-09 00:55:42.619260 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619264 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619267 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619271 | orchestrator | 2026-04-09 00:55:42.619275 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:55:42.619279 | orchestrator | Thursday 09 April 2026 00:54:51 +0000 (0:00:00.328) 0:09:05.652 ******** 2026-04-09 00:55:42.619282 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619286 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619290 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619296 | orchestrator | 2026-04-09 00:55:42.619302 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:55:42.619309 | orchestrator | Thursday 09 April 2026 00:54:52 +0000 (0:00:00.334) 0:09:05.987 ******** 2026-04-09 00:55:42.619315 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619322 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.619328 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619333 | orchestrator | 2026-04-09 00:55:42.619336 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:55:42.619341 | orchestrator | Thursday 09 April 2026 00:54:52 +0000 (0:00:00.347) 0:09:06.334 ******** 2026-04-09 00:55:42.619347 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619354 | orchestrator | ok: [testbed-node-4] 2026-04-09 
00:55:42.619360 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619367 | orchestrator | 2026-04-09 00:55:42.619372 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:55:42.619376 | orchestrator | Thursday 09 April 2026 00:54:53 +0000 (0:00:00.663) 0:09:06.998 ******** 2026-04-09 00:55:42.619386 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619389 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.619393 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619397 | orchestrator | 2026-04-09 00:55:42.619401 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:55:42.619405 | orchestrator | Thursday 09 April 2026 00:54:53 +0000 (0:00:00.372) 0:09:07.370 ******** 2026-04-09 00:55:42.619408 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619412 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619416 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619420 | orchestrator | 2026-04-09 00:55:42.619424 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:55:42.619428 | orchestrator | Thursday 09 April 2026 00:54:53 +0000 (0:00:00.273) 0:09:07.644 ******** 2026-04-09 00:55:42.619431 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619435 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619439 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619443 | orchestrator | 2026-04-09 00:55:42.619446 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:55:42.619450 | orchestrator | Thursday 09 April 2026 00:54:54 +0000 (0:00:00.293) 0:09:07.937 ******** 2026-04-09 00:55:42.619454 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619458 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619462 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619466 | orchestrator | 2026-04-09 00:55:42.619469 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:55:42.619473 | orchestrator | Thursday 09 April 2026 00:54:54 +0000 (0:00:00.533) 0:09:08.471 ******** 2026-04-09 00:55:42.619477 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619483 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.619488 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619494 | orchestrator | 2026-04-09 00:55:42.619500 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:55:42.619506 | orchestrator | Thursday 09 April 2026 00:54:55 +0000 (0:00:00.382) 0:09:08.853 ******** 2026-04-09 00:55:42.619512 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.619518 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.619524 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.619531 | orchestrator | 2026-04-09 00:55:42.619537 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-09 00:55:42.619544 | orchestrator | Thursday 09 April 2026 00:54:55 +0000 (0:00:00.571) 0:09:09.425 ******** 2026-04-09 00:55:42.619551 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.619555 | orchestrator | 2026-04-09 00:55:42.619559 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 00:55:42.619563 | orchestrator | Thursday 09 April 2026 00:54:56 +0000 (0:00:00.843) 0:09:10.268 ******** 2026-04-09 00:55:42.619567 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.619570 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:55:42.619575 | orchestrator | ok: [testbed-node-3 -> {{ 
groups.get(mon_group_name)[0] }}] 2026-04-09 00:55:42.619581 | orchestrator | 2026-04-09 00:55:42.619587 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:55:42.619593 | orchestrator | Thursday 09 April 2026 00:54:58 +0000 (0:00:01.789) 0:09:12.057 ******** 2026-04-09 00:55:42.619599 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:55:42.619605 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:55:42.619611 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.619621 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:55:42.619628 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 00:55:42.619634 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.619644 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:55:42.619651 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 00:55:42.619658 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.619664 | orchestrator | 2026-04-09 00:55:42.619671 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-09 00:55:42.619677 | orchestrator | Thursday 09 April 2026 00:54:59 +0000 (0:00:01.299) 0:09:13.357 ******** 2026-04-09 00:55:42.619683 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.619690 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.619696 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.619702 | orchestrator | 2026-04-09 00:55:42.619709 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-09 00:55:42.619715 | orchestrator | Thursday 09 April 2026 00:55:00 +0000 (0:00:00.557) 0:09:13.915 ******** 2026-04-09 00:55:42.619721 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 
00:55:42.619727 | orchestrator | 2026-04-09 00:55:42.619734 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-09 00:55:42.619740 | orchestrator | Thursday 09 April 2026 00:55:00 +0000 (0:00:00.522) 0:09:14.438 ******** 2026-04-09 00:55:42.619746 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.619754 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.619760 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.619766 | orchestrator | 2026-04-09 00:55:42.619773 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-09 00:55:42.619777 | orchestrator | Thursday 09 April 2026 00:55:01 +0000 (0:00:00.788) 0:09:15.226 ******** 2026-04-09 00:55:42.619781 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.619785 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:55:42.619789 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.619793 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:55:42.619800 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.619806 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if 
groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:55:42.619812 | orchestrator | 2026-04-09 00:55:42.619817 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 00:55:42.619823 | orchestrator | Thursday 09 April 2026 00:55:05 +0000 (0:00:04.438) 0:09:19.664 ******** 2026-04-09 00:55:42.619829 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.619835 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:55:42.619842 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.619848 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:55:42.619858 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:55:42.619864 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:55:42.619880 | orchestrator | 2026-04-09 00:55:42.619887 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:55:42.619893 | orchestrator | Thursday 09 April 2026 00:55:07 +0000 (0:00:01.954) 0:09:21.619 ******** 2026-04-09 00:55:42.619904 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:55:42.619911 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.619916 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:55:42.619922 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.619929 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:55:42.619935 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.619941 | orchestrator | 2026-04-09 00:55:42.619947 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-09 00:55:42.619953 | orchestrator | Thursday 09 April 2026 00:55:09 
+0000 (0:00:01.311) 0:09:22.931 ******** 2026-04-09 00:55:42.619959 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-09 00:55:42.619965 | orchestrator | 2026-04-09 00:55:42.619972 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-09 00:55:42.619978 | orchestrator | Thursday 09 April 2026 00:55:09 +0000 (0:00:00.186) 0:09:23.118 ******** 2026-04-09 00:55:42.619984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.619991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.620001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.620007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.620014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.620020 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.620027 | orchestrator | 2026-04-09 00:55:42.620033 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-09 00:55:42.620039 | orchestrator | Thursday 09 April 2026 00:55:09 +0000 (0:00:00.556) 0:09:23.675 ******** 2026-04-09 00:55:42.620046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.620052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-09 00:55:42.620058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.620065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.620071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:55:42.620077 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.620083 | orchestrator | 2026-04-09 00:55:42.620090 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-09 00:55:42.620096 | orchestrator | Thursday 09 April 2026 00:55:10 +0000 (0:00:00.693) 0:09:24.369 ******** 2026-04-09 00:55:42.620103 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:55:42.620109 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:55:42.620116 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:55:42.620122 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:55:42.620132 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:55:42.620138 | orchestrator | 2026-04-09 00:55:42.620144 | orchestrator | TASK [ceph-rgw : 
Include_tasks openstack-keystone.yml] ************************* 2026-04-09 00:55:42.620150 | orchestrator | Thursday 09 April 2026 00:55:31 +0000 (0:00:20.468) 0:09:44.837 ******** 2026-04-09 00:55:42.620156 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.620162 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.620169 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.620175 | orchestrator | 2026-04-09 00:55:42.620181 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-09 00:55:42.620188 | orchestrator | Thursday 09 April 2026 00:55:31 +0000 (0:00:00.555) 0:09:45.393 ******** 2026-04-09 00:55:42.620194 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.620203 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.620209 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.620216 | orchestrator | 2026-04-09 00:55:42.620222 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-09 00:55:42.620229 | orchestrator | Thursday 09 April 2026 00:55:31 +0000 (0:00:00.300) 0:09:45.694 ******** 2026-04-09 00:55:42.620235 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.620241 | orchestrator | 2026-04-09 00:55:42.620247 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-09 00:55:42.620254 | orchestrator | Thursday 09 April 2026 00:55:32 +0000 (0:00:00.519) 0:09:46.214 ******** 2026-04-09 00:55:42.620260 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.620265 | orchestrator | 2026-04-09 00:55:42.620271 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-09 00:55:42.620278 | orchestrator | Thursday 09 April 
2026 00:55:33 +0000 (0:00:00.748) 0:09:46.962 ******** 2026-04-09 00:55:42.620284 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.620290 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.620296 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.620302 | orchestrator | 2026-04-09 00:55:42.620309 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-09 00:55:42.620315 | orchestrator | Thursday 09 April 2026 00:55:34 +0000 (0:00:01.095) 0:09:48.057 ******** 2026-04-09 00:55:42.620321 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.620327 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.620334 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.620340 | orchestrator | 2026-04-09 00:55:42.620347 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-09 00:55:42.620356 | orchestrator | Thursday 09 April 2026 00:55:35 +0000 (0:00:01.010) 0:09:49.068 ******** 2026-04-09 00:55:42.620363 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:55:42.620369 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:55:42.620376 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:55:42.620382 | orchestrator | 2026-04-09 00:55:42.620389 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-09 00:55:42.620394 | orchestrator | Thursday 09 April 2026 00:55:37 +0000 (0:00:01.949) 0:09:51.018 ******** 2026-04-09 00:55:42.620401 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.620407 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.620413 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:55:42.620423 | orchestrator | 2026-04-09 00:55:42.620429 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:55:42.620436 | orchestrator | Thursday 09 April 2026 00:55:39 +0000 (0:00:02.262) 0:09:53.281 ******** 2026-04-09 00:55:42.620442 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.620448 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.620454 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:55:42.620461 | orchestrator | 2026-04-09 00:55:42.620467 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-09 00:55:42.620474 | orchestrator | Thursday 09 April 2026 00:55:39 +0000 (0:00:00.438) 0:09:53.719 ******** 2026-04-09 00:55:42.620480 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:55:42.620486 | orchestrator | 2026-04-09 00:55:42.620493 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-09 00:55:42.620498 | orchestrator | Thursday 09 April 2026 00:55:40 +0000 (0:00:00.468) 0:09:54.188 ******** 2026-04-09 00:55:42.620504 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.620510 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.620516 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.620522 | orchestrator | 2026-04-09 00:55:42.620529 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-09 00:55:42.620535 | orchestrator | Thursday 09 April 2026 00:55:40 +0000 (0:00:00.293) 0:09:54.482 ******** 2026-04-09 00:55:42.620541 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.620547 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:55:42.620553 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
00:55:42.620559 | orchestrator | 2026-04-09 00:55:42.620565 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-09 00:55:42.620571 | orchestrator | Thursday 09 April 2026 00:55:41 +0000 (0:00:00.459) 0:09:54.941 ******** 2026-04-09 00:55:42.620578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:55:42.620584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:55:42.620590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:55:42.620596 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:55:42.620602 | orchestrator | 2026-04-09 00:55:42.620608 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-09 00:55:42.620614 | orchestrator | Thursday 09 April 2026 00:55:41 +0000 (0:00:00.556) 0:09:55.497 ******** 2026-04-09 00:55:42.620620 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:55:42.620627 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:55:42.620633 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:55:42.620639 | orchestrator | 2026-04-09 00:55:42.620645 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:55:42.620651 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2026-04-09 00:55:42.620661 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-09 00:55:42.620667 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-09 00:55:42.620673 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2026-04-09 00:55:42.620679 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-09 00:55:42.620686 
| orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-09 00:55:42.620697 | orchestrator | 2026-04-09 00:55:42.620703 | orchestrator | 2026-04-09 00:55:42.620710 | orchestrator | 2026-04-09 00:55:42.620716 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:55:42.620722 | orchestrator | Thursday 09 April 2026 00:55:41 +0000 (0:00:00.203) 0:09:55.700 ******** 2026-04-09 00:55:42.620728 | orchestrator | =============================================================================== 2026-04-09 00:55:42.620734 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 80.21s 2026-04-09 00:55:42.620741 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 36.96s 2026-04-09 00:55:42.620747 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 20.47s 2026-04-09 00:55:42.620753 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 12.94s 2026-04-09 00:55:42.620764 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.70s 2026-04-09 00:55:42.620770 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.78s 2026-04-09 00:55:42.620775 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 7.78s 2026-04-09 00:55:42.620781 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.79s 2026-04-09 00:55:42.620788 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.34s 2026-04-09 00:55:42.620794 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.14s 2026-04-09 00:55:42.620800 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 5.77s 2026-04-09 00:55:42.620806 
| orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.89s 2026-04-09 00:55:42.620812 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.46s 2026-04-09 00:55:42.620817 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.44s 2026-04-09 00:55:42.620823 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.98s 2026-04-09 00:55:42.620829 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.84s 2026-04-09 00:55:42.620835 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.17s 2026-04-09 00:55:42.620842 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.15s 2026-04-09 00:55:42.620848 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.14s 2026-04-09 00:55:42.620854 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.07s 2026-04-09 00:55:42.620860 | orchestrator | 2026-04-09 00:55:42 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:55:42.620867 | orchestrator | 2026-04-09 00:55:42 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED 2026-04-09 00:55:42.620902 | orchestrator | 2026-04-09 00:55:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:55:45.661594 | orchestrator | 2026-04-09 00:55:45 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:55:45.663228 | orchestrator | 2026-04-09 00:55:45 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state STARTED 2026-04-09 00:55:45.664981 | orchestrator | 2026-04-09 00:55:45 | INFO  | Task 101388f9-04f3-4f96-ada5-fdee872920fc is in state STARTED 2026-04-09 00:55:45.665129 | orchestrator | 2026-04-09 00:55:45 | INFO  | Wait 1 second(s) until the next check 
2026-04-09 00:57:26.363472 | orchestrator | 2026-04-09 00:57:26 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:26.365354 | orchestrator | 2026-04-09 00:57:26 | INFO  | Task bf08eb17-f4d6-4431-a206-44690977a388 is in state SUCCESS 2026-04-09 00:57:26.366679 | orchestrator | 2026-04-09 00:57:26.366711 | orchestrator | 2026-04-09 00:57:26.366717 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:57:26.366723 | orchestrator | 2026-04-09 00:57:26.366728 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:57:26.366733 | orchestrator | Thursday 09 April 2026 00:54:45 +0000 (0:00:00.336) 0:00:00.336 ******** 2026-04-09 00:57:26.366740 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:26.366777 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:26.366783 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:26.366789 | orchestrator | 2026-04-09 00:57:26.366795 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:57:26.366801 | orchestrator | Thursday 09 April 2026 00:54:46 +0000 (0:00:00.269) 0:00:00.605 ******** 2026-04-09 00:57:26.366808 | orchestrator |
ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-09 00:57:26.366814 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-09 00:57:26.366821 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-09 00:57:26.366827 | orchestrator | 2026-04-09 00:57:26.366997 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-09 00:57:26.367008 | orchestrator | 2026-04-09 00:57:26.367014 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 00:57:26.367020 | orchestrator | Thursday 09 April 2026 00:54:46 +0000 (0:00:00.274) 0:00:00.879 ******** 2026-04-09 00:57:26.367025 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:57:26.367029 | orchestrator | 2026-04-09 00:57:26.367033 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-09 00:57:26.367038 | orchestrator | Thursday 09 April 2026 00:54:47 +0000 (0:00:00.590) 0:00:01.470 ******** 2026-04-09 00:57:26.367042 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:57:26.367046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:57:26.367050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:57:26.367053 | orchestrator | 2026-04-09 00:57:26.367057 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-09 00:57:26.367076 | orchestrator | Thursday 09 April 2026 00:54:47 +0000 (0:00:00.970) 0:00:02.441 ******** 2026-04-09 00:57:26.367092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367139 | orchestrator | 2026-04-09 00:57:26.367143 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 00:57:26.367147 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:01.358) 0:00:03.799 ******** 2026-04-09 00:57:26.367151 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:57:26.367162 | orchestrator | 2026-04-09 00:57:26.367166 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-09 00:57:26.367180 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.523) 0:00:04.322 ******** 2026-04-09 00:57:26.367184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367223 | orchestrator | 2026-04-09 00:57:26.367227 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-09 00:57:26.367231 | orchestrator | Thursday 09 April 2026 00:54:52 +0000 (0:00:02.675) 0:00:06.998 ******** 2026-04-09 00:57:26.367237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367260 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:26.367267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367271 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:26.367279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367283 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:26.367287 | orchestrator | 2026-04-09 00:57:26.367291 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-09 00:57:26.367295 | orchestrator | Thursday 09 April 2026 00:54:53 +0000 (0:00:01.104) 0:00:08.102 ******** 2026-04-09 00:57:26.367299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367318 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:26.367325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367332 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:26.367336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367347 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:26.367351 | orchestrator | 2026-04-09 00:57:26.367355 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-09 00:57:26.367358 | orchestrator | Thursday 09 April 2026 00:54:54 +0000 (0:00:01.079) 0:00:09.182 ******** 2026-04-09 00:57:26.367362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-09 00:57:26.367388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367407 | orchestrator | 2026-04-09 00:57:26.367411 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-09 00:57:26.367414 | orchestrator | Thursday 09 April 2026 00:54:57 +0000 (0:00:02.923) 0:00:12.106 ******** 2026-04-09 00:57:26.367418 | orchestrator | changed: [testbed-node-1] 2026-04-09 
00:57:26.367422 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:26.367426 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:57:26.367430 | orchestrator | 2026-04-09 00:57:26.367434 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-09 00:57:26.367438 | orchestrator | Thursday 09 April 2026 00:55:00 +0000 (0:00:03.140) 0:00:15.247 ******** 2026-04-09 00:57:26.367441 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:26.367445 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:57:26.367449 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:57:26.367453 | orchestrator | 2026-04-09 00:57:26.367457 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-09 00:57:26.367460 | orchestrator | Thursday 09 April 2026 00:55:02 +0000 (0:00:01.642) 0:00:16.889 ******** 2026-04-09 00:57:26.367467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:57:26.367485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:57:26.367504 | orchestrator | 2026-04-09 00:57:26.367508 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-09 00:57:26.367512 | orchestrator | Thursday 09 April 2026 00:55:04 +0000 (0:00:02.072) 0:00:18.962 ******** 2026-04-09 00:57:26.367516 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:57:26.367520 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:57:26.367524 | orchestrator | } 2026-04-09 00:57:26.367528 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:57:26.367531 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:57:26.367535 | orchestrator | } 2026-04-09 00:57:26.367539 | orchestrator | 
changed: [testbed-node-2] => { 2026-04-09 00:57:26.367543 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:57:26.367547 | orchestrator | } 2026-04-09 00:57:26.367551 | orchestrator | 2026-04-09 00:57:26.367554 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:57:26.367561 | orchestrator | Thursday 09 April 2026 00:55:04 +0000 (0:00:00.425) 0:00:19.388 ******** 2026-04-09 00:57:26.367565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367573 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:26.367580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367593 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:26.367598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:57:26.367602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:57:26.367606 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:26.367611 | orchestrator | 2026-04-09 00:57:26.367618 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 00:57:26.367622 | orchestrator | Thursday 09 April 2026 00:55:05 +0000 (0:00:00.719) 0:00:20.107 ******** 2026-04-09 00:57:26.367627 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:26.367631 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:26.367636 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:26.367640 | orchestrator | 2026-04-09 00:57:26.367644 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 00:57:26.367652 | orchestrator | Thursday 09 April 2026 00:55:05 +0000 (0:00:00.255) 0:00:20.362 ******** 2026-04-09 00:57:26.367656 | orchestrator | 2026-04-09 00:57:26.367660 | orchestrator | TASK [opensearch : Flush handlers] 
********************************************* 2026-04-09 00:57:26.367665 | orchestrator | Thursday 09 April 2026 00:55:05 +0000 (0:00:00.059) 0:00:20.422 ******** 2026-04-09 00:57:26.367669 | orchestrator | 2026-04-09 00:57:26.367674 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 00:57:26.367678 | orchestrator | Thursday 09 April 2026 00:55:06 +0000 (0:00:00.063) 0:00:20.485 ******** 2026-04-09 00:57:26.367682 | orchestrator | 2026-04-09 00:57:26.367687 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-09 00:57:26.367691 | orchestrator | Thursday 09 April 2026 00:55:06 +0000 (0:00:00.170) 0:00:20.656 ******** 2026-04-09 00:57:26.367696 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:26.367700 | orchestrator | 2026-04-09 00:57:26.367705 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-09 00:57:26.367709 | orchestrator | Thursday 09 April 2026 00:55:06 +0000 (0:00:00.189) 0:00:20.845 ******** 2026-04-09 00:57:26.367714 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:26.367718 | orchestrator | 2026-04-09 00:57:26.367722 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-09 00:57:26.367727 | orchestrator | Thursday 09 April 2026 00:55:06 +0000 (0:00:00.164) 0:00:21.010 ******** 2026-04-09 00:57:26.367731 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:26.367736 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:57:26.367740 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:57:26.367785 | orchestrator | 2026-04-09 00:57:26.367791 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-09 00:57:26.367798 | orchestrator | Thursday 09 April 2026 00:56:01 +0000 (0:00:54.491) 0:01:15.502 ******** 2026-04-09 00:57:26.367805 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 00:57:26.367811 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:57:26.367817 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:57:26.367824 | orchestrator | 2026-04-09 00:57:26.367829 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 00:57:26.367833 | orchestrator | Thursday 09 April 2026 00:57:10 +0000 (0:01:09.868) 0:02:25.370 ******** 2026-04-09 00:57:26.367841 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:57:26.367846 | orchestrator | 2026-04-09 00:57:26.367850 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-09 00:57:26.367855 | orchestrator | Thursday 09 April 2026 00:57:11 +0000 (0:00:00.631) 0:02:26.002 ******** 2026-04-09 00:57:26.367859 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:26.367863 | orchestrator | 2026-04-09 00:57:26.367867 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-09 00:57:26.367872 | orchestrator | Thursday 09 April 2026 00:57:14 +0000 (0:00:02.683) 0:02:28.685 ******** 2026-04-09 00:57:26.367876 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:26.367880 | orchestrator | 2026-04-09 00:57:26.367885 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-09 00:57:26.367889 | orchestrator | Thursday 09 April 2026 00:57:16 +0000 (0:00:02.629) 0:02:31.315 ******** 2026-04-09 00:57:26.367894 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:26.367898 | orchestrator | 2026-04-09 00:57:26.367903 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-09 00:57:26.367907 | orchestrator | Thursday 09 April 2026 00:57:19 +0000 (0:00:02.591) 0:02:33.906 ******** 2026-04-09 00:57:26.367911 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 00:57:26.367916 | orchestrator | 2026-04-09 00:57:26.367920 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-09 00:57:26.367924 | orchestrator | Thursday 09 April 2026 00:57:22 +0000 (0:00:02.970) 0:02:36.876 ******** 2026-04-09 00:57:26.367929 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:26.367937 | orchestrator | 2026-04-09 00:57:26.367941 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:57:26.367947 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 00:57:26.367952 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 00:57:26.367956 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 00:57:26.367961 | orchestrator | 2026-04-09 00:57:26.367965 | orchestrator | 2026-04-09 00:57:26.367969 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:57:26.367973 | orchestrator | Thursday 09 April 2026 00:57:25 +0000 (0:00:02.801) 0:02:39.678 ******** 2026-04-09 00:57:26.367978 | orchestrator | =============================================================================== 2026-04-09 00:57:26.367984 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 69.87s 2026-04-09 00:57:26.367990 | orchestrator | opensearch : Restart opensearch container ------------------------------ 54.49s 2026-04-09 00:57:26.367996 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.14s 2026-04-09 00:57:26.368002 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.97s 2026-04-09 00:57:26.368011 | orchestrator | opensearch : Copying over config.json files for services 
---------------- 2.92s 2026-04-09 00:57:26.368017 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.80s 2026-04-09 00:57:26.368023 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.68s 2026-04-09 00:57:26.368029 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.68s 2026-04-09 00:57:26.368035 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.63s 2026-04-09 00:57:26.368041 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.59s 2026-04-09 00:57:26.368047 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.07s 2026-04-09 00:57:26.368053 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.64s 2026-04-09 00:57:26.368059 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.36s 2026-04-09 00:57:26.368064 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.10s 2026-04-09 00:57:26.368070 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.08s 2026-04-09 00:57:26.368077 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.97s 2026-04-09 00:57:26.368082 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.72s 2026-04-09 00:57:26.368174 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.63s 2026-04-09 00:57:26.368179 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2026-04-09 00:57:26.368183 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-04-09 00:57:26.368187 | orchestrator | 2026-04-09 00:57:26 | INFO  | Task 
101388f9-04f3-4f96-ada5-fdee872920fc is in state STARTED 2026-04-09 00:57:26.368195 | orchestrator | 2026-04-09 00:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:29.404451 | orchestrator | 2026-04-09 00:57:29 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:29.406163 | orchestrator | 2026-04-09 00:57:29 | INFO  | Task 101388f9-04f3-4f96-ada5-fdee872920fc is in state STARTED 2026-04-09 00:57:29.406224 | orchestrator | 2026-04-09 00:57:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:32.444501 | orchestrator | 2026-04-09 00:57:32 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:32.446219 | orchestrator | 2026-04-09 00:57:32 | INFO  | Task 101388f9-04f3-4f96-ada5-fdee872920fc is in state STARTED 2026-04-09 00:57:32.446284 | orchestrator | 2026-04-09 00:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:35.494292 | orchestrator | 2026-04-09 00:57:35 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:35.497019 | orchestrator | 2026-04-09 00:57:35 | INFO  | Task 101388f9-04f3-4f96-ada5-fdee872920fc is in state STARTED 2026-04-09 00:57:35.497189 | orchestrator | 2026-04-09 00:57:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:38.544262 | orchestrator | 2026-04-09 00:57:38 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:38.545305 | orchestrator | 2026-04-09 00:57:38 | INFO  | Task 101388f9-04f3-4f96-ada5-fdee872920fc is in state STARTED 2026-04-09 00:57:38.545505 | orchestrator | 2026-04-09 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:41.588455 | orchestrator | 2026-04-09 00:57:41 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:41.592408 | orchestrator | 2026-04-09 00:57:41 | INFO  | Task 101388f9-04f3-4f96-ada5-fdee872920fc is in state SUCCESS 2026-04-09 
00:57:41.594513 | orchestrator | 2026-04-09 00:57:41.594586 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-09 00:57:41.594597 | orchestrator | 2.16.14 2026-04-09 00:57:41.594605 | orchestrator | 2026-04-09 00:57:41.594689 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-09 00:57:41.594698 | orchestrator | 2026-04-09 00:57:41.594703 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 00:57:41.594710 | orchestrator | Thursday 09 April 2026 00:55:46 +0000 (0:00:00.534) 0:00:00.534 ******** 2026-04-09 00:57:41.594718 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:57:41.594748 | orchestrator | 2026-04-09 00:57:41.594753 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-09 00:57:41.594758 | orchestrator | Thursday 09 April 2026 00:55:47 +0000 (0:00:00.595) 0:00:01.129 ******** 2026-04-09 00:57:41.594762 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:57:41.594766 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:57:41.594770 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:57:41.594774 | orchestrator | 2026-04-09 00:57:41.594778 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-09 00:57:41.594824 | orchestrator | Thursday 09 April 2026 00:55:48 +0000 (0:00:00.994) 0:00:02.124 ******** 2026-04-09 00:57:41.594830 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:57:41.594834 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:57:41.594838 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:57:41.594842 | orchestrator | 2026-04-09 00:57:41.594859 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 00:57:41.594863 | orchestrator | Thursday 09 April 
2026 00:55:48 +0000 (0:00:00.287) 0:00:02.412 ******** 2026-04-09 00:57:41.594866 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:57:41.594870 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:57:41.594874 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:57:41.594878 | orchestrator | 2026-04-09 00:57:41.594882 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 00:57:41.594885 | orchestrator | Thursday 09 April 2026 00:55:49 +0000 (0:00:00.891) 0:00:03.303 ******** 2026-04-09 00:57:41.594889 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:57:41.594893 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:57:41.594897 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:57:41.594901 | orchestrator | 2026-04-09 00:57:41.594904 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 00:57:41.594908 | orchestrator | Thursday 09 April 2026 00:55:49 +0000 (0:00:00.306) 0:00:03.610 ******** 2026-04-09 00:57:41.595237 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:57:41.595253 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:57:41.595259 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:57:41.595265 | orchestrator | 2026-04-09 00:57:41.595271 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 00:57:41.595277 | orchestrator | Thursday 09 April 2026 00:55:49 +0000 (0:00:00.293) 0:00:03.903 ******** 2026-04-09 00:57:41.595282 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:57:41.595286 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:57:41.595292 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:57:41.595298 | orchestrator | 2026-04-09 00:57:41.595304 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 00:57:41.595310 | orchestrator | Thursday 09 April 2026 00:55:50 +0000 (0:00:00.305) 0:00:04.208 ******** 2026-04-09 
00:57:41.595316 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595323 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.595329 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.595335 | orchestrator |
2026-04-09 00:57:41.595341 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 00:57:41.595347 | orchestrator | Thursday 09 April 2026 00:55:50 +0000 (0:00:00.490) 0:00:04.699 ********
2026-04-09 00:57:41.595353 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.595359 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:57:41.595365 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:57:41.595371 | orchestrator |
2026-04-09 00:57:41.595377 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 00:57:41.595383 | orchestrator | Thursday 09 April 2026 00:55:51 +0000 (0:00:00.282) 0:00:04.981 ********
2026-04-09 00:57:41.595389 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:57:41.595395 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:57:41.595403 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:57:41.595407 | orchestrator |
2026-04-09 00:57:41.595410 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 00:57:41.595414 | orchestrator | Thursday 09 April 2026 00:55:51 +0000 (0:00:00.587) 0:00:05.569 ********
2026-04-09 00:57:41.595418 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.595422 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:57:41.595425 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:57:41.595429 | orchestrator |
2026-04-09 00:57:41.595433 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 00:57:41.595437 | orchestrator | Thursday 09 April 2026 00:55:52 +0000 (0:00:00.387) 0:00:05.956 ********
2026-04-09 00:57:41.595440 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:57:41.595444 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:57:41.595448 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:57:41.595452 | orchestrator |
2026-04-09 00:57:41.595455 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 00:57:41.595459 | orchestrator | Thursday 09 April 2026 00:55:54 +0000 (0:00:02.681) 0:00:08.638 ********
2026-04-09 00:57:41.595463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:57:41.595467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:57:41.595471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:57:41.595475 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595479 | orchestrator |
2026-04-09 00:57:41.595511 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 00:57:41.595516 | orchestrator | Thursday 09 April 2026 00:55:55 +0000 (0:00:00.400) 0:00:09.038 ********
2026-04-09 00:57:41.595532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595547 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595551 | orchestrator |
2026-04-09 00:57:41.595555 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 00:57:41.595568 | orchestrator | Thursday 09 April 2026 00:55:55 +0000 (0:00:00.802) 0:00:09.841 ********
2026-04-09 00:57:41.595577 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595585 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595598 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595604 | orchestrator |
2026-04-09 00:57:41.595610 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 00:57:41.595615 | orchestrator | Thursday 09 April 2026 00:55:56 +0000 (0:00:00.146) 0:00:09.987 ********
2026-04-09 00:57:41.595624 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '16794aa129c0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 00:55:52.853579', 'end': '2026-04-09 00:55:52.884323', 'delta': '0:00:00.030744', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['16794aa129c0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595633 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6ae79e42e51a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 00:55:53.791030', 'end': '2026-04-09 00:55:53.816660', 'delta': '0:00:00.025630', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6ae79e42e51a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595667 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'dad3a035a931', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 00:55:54.541538', 'end': '2026-04-09 00:55:54.566448', 'delta': '0:00:00.024910', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['dad3a035a931'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.595672 | orchestrator |
2026-04-09 00:57:41.595676 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 00:57:41.595680 | orchestrator | Thursday 09 April 2026 00:55:56 +0000 (0:00:00.347) 0:00:10.335 ********
2026-04-09 00:57:41.595684 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.595687 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:57:41.595691 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:57:41.595695 | orchestrator |
2026-04-09 00:57:41.595702 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 00:57:41.595706 | orchestrator | Thursday 09 April 2026 00:55:56 +0000 (0:00:00.432) 0:00:10.767 ********
2026-04-09 00:57:41.595710 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-09 00:57:41.595714 | orchestrator |
2026-04-09 00:57:41.595718 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 00:57:41.595722 | orchestrator | Thursday 09 April 2026 00:55:58 +0000 (0:00:01.364) 0:00:12.132 ********
2026-04-09 00:57:41.595756 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595760 | orchestrator | skipping: [testbed-node-4]
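Editor's note: the "Find a running mon container" task above shells out to `docker ps -q --filter name=ceph-mon-<hostname>` on each monitor, as the recorded `cmd` fields show. A minimal sketch of that per-monitor loop (host names taken from this testbed; the sketch only prints the command rather than executing Docker):

```shell
# Sketch of the mon-container discovery step seen in the log above.
# On a real deployment each command would run on the respective monitor
# and return the container ID of the ceph-mon container, if any.
mons="testbed-node-0 testbed-node-1 testbed-node-2"
for host in $mons; do
  echo "docker ps -q --filter name=ceph-mon-${host}"
done
```

In the log, all three commands returned a container ID (rc 0, non-empty stdout), so the containerized branch of the role set `running_mon` and the socket checks for the non-container case were skipped.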
2026-04-09 00:57:41.595764 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.595768 | orchestrator |
2026-04-09 00:57:41.595772 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 00:57:41.595775 | orchestrator | Thursday 09 April 2026 00:55:58 +0000 (0:00:00.277) 0:00:12.410 ********
2026-04-09 00:57:41.595779 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595783 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.595787 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.595792 | orchestrator |
2026-04-09 00:57:41.595798 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 00:57:41.595804 | orchestrator | Thursday 09 April 2026 00:55:58 +0000 (0:00:00.394) 0:00:12.804 ********
2026-04-09 00:57:41.595810 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595816 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.595822 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.595829 | orchestrator |
2026-04-09 00:57:41.595834 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-09 00:57:41.595840 | orchestrator | Thursday 09 April 2026 00:55:59 +0000 (0:00:00.441) 0:00:13.245 ********
2026-04-09 00:57:41.595847 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.595852 | orchestrator |
2026-04-09 00:57:41.595856 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-09 00:57:41.595860 | orchestrator | Thursday 09 April 2026 00:55:59 +0000 (0:00:00.117) 0:00:13.363 ********
2026-04-09 00:57:41.595864 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595869 | orchestrator |
2026-04-09 00:57:41.595873 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 00:57:41.595879 | orchestrator | Thursday 09 April 2026 00:55:59 +0000 (0:00:00.202) 0:00:13.566 ********
2026-04-09 00:57:41.595885 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595897 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.595903 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.595910 | orchestrator |
2026-04-09 00:57:41.595917 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-09 00:57:41.595924 | orchestrator | Thursday 09 April 2026 00:55:59 +0000 (0:00:00.260) 0:00:13.826 ********
2026-04-09 00:57:41.595930 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595937 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.595943 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.595949 | orchestrator |
2026-04-09 00:57:41.595956 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-09 00:57:41.595963 | orchestrator | Thursday 09 April 2026 00:56:00 +0000 (0:00:00.306) 0:00:14.133 ********
2026-04-09 00:57:41.595969 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.595975 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.595982 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.595987 | orchestrator |
2026-04-09 00:57:41.595992 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-09 00:57:41.595996 | orchestrator | Thursday 09 April 2026 00:56:00 +0000 (0:00:00.480) 0:00:14.613 ********
2026-04-09 00:57:41.596000 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.596004 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.596009 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.596014 | orchestrator |
2026-04-09 00:57:41.596019 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-09 00:57:41.596023 | orchestrator | Thursday 09 April 2026 00:56:00 +0000 (0:00:00.324) 0:00:14.937 ********
2026-04-09 00:57:41.596027 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.596032 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.596036 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.596040 | orchestrator |
2026-04-09 00:57:41.596044 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-09 00:57:41.596049 | orchestrator | Thursday 09 April 2026 00:56:01 +0000 (0:00:00.327) 0:00:15.264 ********
2026-04-09 00:57:41.596054 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.596058 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.596063 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.596090 | orchestrator |
2026-04-09 00:57:41.596097 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-09 00:57:41.596104 | orchestrator | Thursday 09 April 2026 00:56:01 +0000 (0:00:00.383) 0:00:15.648 ********
2026-04-09 00:57:41.596111 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.596117 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.596123 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.596130 | orchestrator |
2026-04-09 00:57:41.596137 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-09 00:57:41.596143 | orchestrator | Thursday 09 April 2026 00:56:02 +0000 (0:00:00.478) 0:00:16.127 ********
2026-04-09 00:57:41.596155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2293633--4853--52c3--92d9--c83407e5923f-osd--block--d2293633--4853--52c3--92d9--c83407e5923f', 'dm-uuid-LVM-nR58myQ6pK7CQaaoaqeUaTr2y04UWbY4rmwX38Fsdxa6f0tdDHKde9pIwH3mBu3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9ee08831--7be2--5055--b7bf--21e225eea3cc-osd--block--9ee08831--7be2--5055--b7bf--21e225eea3cc', 'dm-uuid-LVM-inKPYMNJVzOEcfQ61vGzCEOAGy0y8MwHDdw1TsQPoBrMrQLR2EaxpS4lADMlmMXF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d534d538--4d4e--5604--9605--85867297f7ab-osd--block--d534d538--4d4e--5604--9605--85867297f7ab', 'dm-uuid-LVM-Lo4LcmWfSy7gVLMDXOQe0r6XJWEZ5FSB3EMePpFYvvdguKgOr1hP2cnsNB4diqWS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6327354e--b41f--514e--b570--068bfc1f3295-osd--block--6327354e--b41f--514e--b570--068bfc1f3295', 'dm-uuid-LVM-EC5U4dGvscytX2YiEPz751fODyiM5M72dFyO1tHVtyJ16NzQnwBHL7mR7Apxxh1s'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a254e30f--06f2--55f8--8a7e--64e382968b4c-osd--block--a254e30f--06f2--55f8--8a7e--64e382968b4c', 'dm-uuid-LVM-ATeWDLeRt2MqpUMSviKSYYqUAu28iXv2moIkCA8ri2lff0l9G9wTQ20ulwcOt3m7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a6a3488f--30e9--5ba3--9724--16c1df88c443-osd--block--a6a3488f--30e9--5ba3--9724--16c1df88c443', 'dm-uuid-LVM-Ctb2dXGixJo1dOG789QKtkmq0iEDri4EYqcM52u9gZ93MvFmZ4au4J2KtuUbJIHA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part1', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part14', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part15', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part16', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 00:57:41.596351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d2293633--4853--52c3--92d9--c83407e5923f-osd--block--d2293633--4853--52c3--92d9--c83407e5923f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b0xHt1-aLY6-yIBz-O5g5-Np9W-VHpg-RipOIZ', 'scsi-0QEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289', 'scsi-SQEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 00:57:41.596392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9ee08831--7be2--5055--b7bf--21e225eea3cc-osd--block--9ee08831--7be2--5055--b7bf--21e225eea3cc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5qfd8o-m5s0-C2o8-KAuj-GJfR-md71-iQQ1hr', 'scsi-0QEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2', 'scsi-SQEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 00:57:41.596418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d', 'scsi-SQEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 00:57:41.596431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 00:57:41.596458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596462 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.596466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-09 00:57:41.596485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 00:57:41.596497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d534d538--4d4e--5604--9605--85867297f7ab-osd--block--d534d538--4d4e--5604--9605--85867297f7ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UdmBAE-npqq-tfss-7hRw-4N8m-BlAM-rR1vGg', 'scsi-0QEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299', 'scsi-SQEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 00:57:41.596502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part1', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part14', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part15', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part16', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:57:41.596509 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6327354e--b41f--514e--b570--068bfc1f3295-osd--block--6327354e--b41f--514e--b570--068bfc1f3295'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TakQuR-BfQB-UvNn-2mEJ-PIfA-Aqz3-sZqwRb', 'scsi-0QEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb', 'scsi-SQEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:57:41.596519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a254e30f--06f2--55f8--8a7e--64e382968b4c-osd--block--a254e30f--06f2--55f8--8a7e--64e382968b4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KFvPvZ-xma3-G6w8-CL9f-zhKx-TjJS-x3zHJD', 'scsi-0QEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2', 'scsi-SQEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:57:41.596524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965', 'scsi-SQEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:57:41.596528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a6a3488f--30e9--5ba3--9724--16c1df88c443-osd--block--a6a3488f--30e9--5ba3--9724--16c1df88c443'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2iUzY0-E3Ac-OC4P-7dkU-6l7s-3ZU1-p0bxHR', 'scsi-0QEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f', 'scsi-SQEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:57:41.596532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:57:41.596536 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:57:41.596540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669', 'scsi-SQEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:57:41.596549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:57:41.596556 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:57:41.596560 | orchestrator | 2026-04-09 00:57:41.596564 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-09 00:57:41.596568 | orchestrator | Thursday 09 April 2026 00:56:02 +0000 (0:00:00.556) 0:00:16.683 ******** 2026-04-09 00:57:41.596574 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d2293633--4853--52c3--92d9--c83407e5923f-osd--block--d2293633--4853--52c3--92d9--c83407e5923f', 'dm-uuid-LVM-nR58myQ6pK7CQaaoaqeUaTr2y04UWbY4rmwX38Fsdxa6f0tdDHKde9pIwH3mBu3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9ee08831--7be2--5055--b7bf--21e225eea3cc-osd--block--9ee08831--7be2--5055--b7bf--21e225eea3cc', 'dm-uuid-LVM-inKPYMNJVzOEcfQ61vGzCEOAGy0y8MwHDdw1TsQPoBrMrQLR2EaxpS4lADMlmMXF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-09 00:57:41.596588 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596608 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596616 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596624 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d534d538--4d4e--5604--9605--85867297f7ab-osd--block--d534d538--4d4e--5604--9605--85867297f7ab', 'dm-uuid-LVM-Lo4LcmWfSy7gVLMDXOQe0r6XJWEZ5FSB3EMePpFYvvdguKgOr1hP2cnsNB4diqWS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part1', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part14', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part15', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part16', 'scsi-SQEMU_QEMU_HARDDISK_15a5a911-370b-4f88-b9e4-bb1166596610-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 00:57:41.596651 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6327354e--b41f--514e--b570--068bfc1f3295-osd--block--6327354e--b41f--514e--b570--068bfc1f3295', 'dm-uuid-LVM-EC5U4dGvscytX2YiEPz751fODyiM5M72dFyO1tHVtyJ16NzQnwBHL7mR7Apxxh1s'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d2293633--4853--52c3--92d9--c83407e5923f-osd--block--d2293633--4853--52c3--92d9--c83407e5923f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-b0xHt1-aLY6-yIBz-O5g5-Np9W-VHpg-RipOIZ', 'scsi-0QEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289', 'scsi-SQEMU_QEMU_HARDDISK_47390fa5-1f85-4c3c-be39-aeec9b514289'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596662 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9ee08831--7be2--5055--b7bf--21e225eea3cc-osd--block--9ee08831--7be2--5055--b7bf--21e225eea3cc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5qfd8o-m5s0-C2o8-KAuj-GJfR-md71-iQQ1hr', 'scsi-0QEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2', 'scsi-SQEMU_QEMU_HARDDISK_03bd35e9-2f61-41d7-a9bf-42d58136cbb2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596674 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d', 'scsi-SQEMU_QEMU_HARDDISK_c3f900a3-fa00-488b-a223-0b2f981ffe7d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596686 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:57:41.596690 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596694 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596710 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596716 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596721 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part1', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part14', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part15', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part16', 'scsi-SQEMU_QEMU_HARDDISK_a4188790-ce2f-4f5e-b379-255b1854dd65-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d534d538--4d4e--5604--9605--85867297f7ab-osd--block--d534d538--4d4e--5604--9605--85867297f7ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UdmBAE-npqq-tfss-7hRw-4N8m-BlAM-rR1vGg', 'scsi-0QEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299', 'scsi-SQEMU_QEMU_HARDDISK_86513dfb-f28c-4b30-a867-1cbb67da9299'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6327354e--b41f--514e--b570--068bfc1f3295-osd--block--6327354e--b41f--514e--b570--068bfc1f3295'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TakQuR-BfQB-UvNn-2mEJ-PIfA-Aqz3-sZqwRb', 'scsi-0QEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb', 'scsi-SQEMU_QEMU_HARDDISK_32d367e8-aaa8-48f2-9cf3-723daee201fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596780 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a254e30f--06f2--55f8--8a7e--64e382968b4c-osd--block--a254e30f--06f2--55f8--8a7e--64e382968b4c', 
'dm-uuid-LVM-ATeWDLeRt2MqpUMSviKSYYqUAu28iXv2moIkCA8ri2lff0l9G9wTQ20ulwcOt3m7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596790 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965', 'scsi-SQEMU_QEMU_HARDDISK_aa0ca3b6-df0d-4a4f-a31d-6790cd2a9965'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596797 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a6a3488f--30e9--5ba3--9724--16c1df88c443-osd--block--a6a3488f--30e9--5ba3--9724--16c1df88c443', 'dm-uuid-LVM-Ctb2dXGixJo1dOG789QKtkmq0iEDri4EYqcM52u9gZ93MvFmZ4au4J2KtuUbJIHA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596805 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:57:41.596809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596813 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596821 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596838 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596843 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part1', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part14', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part15', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part16', 'scsi-SQEMU_QEMU_HARDDISK_364cd792-88d4-4656-84ac-42e5adfc1168-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a254e30f--06f2--55f8--8a7e--64e382968b4c-osd--block--a254e30f--06f2--55f8--8a7e--64e382968b4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KFvPvZ-xma3-G6w8-CL9f-zhKx-TjJS-x3zHJD', 'scsi-0QEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2', 'scsi-SQEMU_QEMU_HARDDISK_81032de5-e928-481e-b1b2-e1c42e1209c2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596931 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a6a3488f--30e9--5ba3--9724--16c1df88c443-osd--block--a6a3488f--30e9--5ba3--9724--16c1df88c443'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2iUzY0-E3Ac-OC4P-7dkU-6l7s-3ZU1-p0bxHR', 'scsi-0QEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f', 'scsi-SQEMU_QEMU_HARDDISK_5d65a6d0-57b2-4d69-9f6c-8b44337eed1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669', 'scsi-SQEMU_QEMU_HARDDISK_2bf0158d-8270-4c58-8b9a-c2cfdc8e7669'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:57:41.596946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:57:41.596950 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.596954 | orchestrator |
2026-04-09 00:57:41.596958 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 00:57:41.596962 | orchestrator | Thursday 09 April 2026 00:56:03 +0000 (0:00:00.740) 0:00:17.423 ********
2026-04-09 00:57:41.596966 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:57:41.596970 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.596974 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:57:41.596977 | orchestrator |
2026-04-09 00:57:41.596981 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 00:57:41.596985 | orchestrator | Thursday 09 April 2026 00:56:04 +0000 (0:00:00.642) 0:00:18.066 ********
2026-04-09 00:57:41.596989 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.596993 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:57:41.596997 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:57:41.597000 | orchestrator |
2026-04-09 00:57:41.597004 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 00:57:41.597011 | orchestrator | Thursday 09 April 2026 00:56:04 +0000 (0:00:00.462) 0:00:18.529 ********
2026-04-09 00:57:41.597015 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.597019 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:57:41.597023 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:57:41.597026 | orchestrator |
2026-04-09 00:57:41.597030 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 00:57:41.597034 | orchestrator | Thursday 09 April 2026 00:56:05 +0000 (0:00:00.640) 0:00:19.170 ********
2026-04-09 00:57:41.597038 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597042 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.597046 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.597050 | orchestrator |
2026-04-09 00:57:41.597054 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 00:57:41.597058 | orchestrator | Thursday 09 April 2026 00:56:05 +0000 (0:00:00.459) 0:00:19.629 ********
2026-04-09 00:57:41.597062 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597066 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.597070 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.597080 | orchestrator |
2026-04-09 00:57:41.597086 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 00:57:41.597091 | orchestrator | Thursday 09 April 2026 00:56:06 +0000 (0:00:00.578) 0:00:20.207 ********
2026-04-09 00:57:41.597097 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597102 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.597108 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.597113 | orchestrator |
2026-04-09 00:57:41.597119 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 00:57:41.597125 | orchestrator | Thursday 09 April 2026 00:56:06 +0000 (0:00:00.496) 0:00:20.704 ********
2026-04-09 00:57:41.597130 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:57:41.597137 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:57:41.597143 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:57:41.597149 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:57:41.597155 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:57:41.597160 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:57:41.597166 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:57:41.597172 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:57:41.597177 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:57:41.597183 | orchestrator |
2026-04-09 00:57:41.597190 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 00:57:41.597196 | orchestrator | Thursday 09 April 2026 00:56:07 +0000 (0:00:00.886) 0:00:21.590 ********
2026-04-09 00:57:41.597202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:57:41.597209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:57:41.597216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:57:41.597220 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597224 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:57:41.597228 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:57:41.597232 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:57:41.597236 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.597239 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:57:41.597243 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:57:41.597247 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:57:41.597251 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.597254 | orchestrator |
2026-04-09 00:57:41.597258 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 00:57:41.597262 | orchestrator | Thursday 09 April 2026 00:56:07 +0000 (0:00:00.359) 0:00:21.950 ********
2026-04-09 00:57:41.597266 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:57:41.597270 | orchestrator |
2026-04-09 00:57:41.597274 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 00:57:41.597279 | orchestrator | Thursday 09 April 2026 00:56:09 +0000 (0:00:01.013) 0:00:22.963 ********
2026-04-09 00:57:41.597287 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597291 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.597295 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.597299 | orchestrator |
2026-04-09 00:57:41.597302 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 00:57:41.597306 | orchestrator | Thursday 09 April 2026 00:56:09 +0000 (0:00:00.395) 0:00:23.359 ********
2026-04-09 00:57:41.597310 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597314 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.597318 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.597326 | orchestrator |
2026-04-09 00:57:41.597330 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 00:57:41.597334 | orchestrator | Thursday 09 April 2026 00:56:09 +0000 (0:00:00.324) 0:00:23.684 ********
2026-04-09 00:57:41.597338 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597341 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.597345 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:57:41.597349 | orchestrator |
2026-04-09 00:57:41.597353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 00:57:41.597357 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:00:00.296) 0:00:23.980 ********
2026-04-09 00:57:41.597360 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.597364 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:57:41.597368 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:57:41.597372 | orchestrator |
2026-04-09 00:57:41.597376 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 00:57:41.597383 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:00:00.638) 0:00:24.619 ********
2026-04-09 00:57:41.597387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:57:41.597390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:57:41.597394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:57:41.597398 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597402 | orchestrator |
2026-04-09 00:57:41.597407 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 00:57:41.597413 | orchestrator | Thursday 09 April 2026 00:56:11 +0000 (0:00:00.363) 0:00:24.983 ********
2026-04-09 00:57:41.597418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:57:41.597423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:57:41.597434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:57:41.597441 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597447 | orchestrator |
2026-04-09 00:57:41.597454 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 00:57:41.597460 | orchestrator | Thursday 09 April 2026 00:56:11 +0000 (0:00:00.342) 0:00:25.325 ********
2026-04-09 00:57:41.597466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:57:41.597471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:57:41.597477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:57:41.597482 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597488 | orchestrator |
2026-04-09 00:57:41.597493 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 00:57:41.597499 | orchestrator | Thursday 09 April 2026 00:56:11 +0000 (0:00:00.505) 0:00:25.830 ********
2026-04-09 00:57:41.597505 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:57:41.597510 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:57:41.597516 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:57:41.597522 | orchestrator |
2026-04-09 00:57:41.597528 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 00:57:41.597533 | orchestrator | Thursday 09 April 2026 00:56:12 +0000 (0:00:00.314) 0:00:26.145 ********
2026-04-09 00:57:41.597540 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 00:57:41.597546 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 00:57:41.597552 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 00:57:41.597558 | orchestrator |
2026-04-09 00:57:41.597566 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 00:57:41.597570 | orchestrator | Thursday 09 April 2026 00:56:12 +0000 (0:00:00.464) 0:00:26.609 ********
2026-04-09 00:57:41.597574 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:57:41.597578 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:57:41.597587 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:57:41.597591 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:57:41.597595 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 00:57:41.597599 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 00:57:41.597603 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 00:57:41.597607 | orchestrator |
2026-04-09 00:57:41.597611 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 00:57:41.597614 | orchestrator | Thursday 09 April 2026 00:56:13 +0000 (0:00:00.946) 0:00:27.556 ********
2026-04-09 00:57:41.597618 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:57:41.597622 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:57:41.597626 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:57:41.597630 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:57:41.597633 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 00:57:41.597637 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 00:57:41.597645 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 00:57:41.597649 | orchestrator |
2026-04-09 00:57:41.597653 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-04-09 00:57:41.597657 | orchestrator | Thursday 09 April 2026 00:56:15 +0000 (0:00:01.830) 0:00:29.386 ********
2026-04-09 00:57:41.597661 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:57:41.597664 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:57:41.597668 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-04-09 00:57:41.597672 | orchestrator |
2026-04-09 00:57:41.597676 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-04-09 00:57:41.597680 | orchestrator | Thursday 09 April 2026 00:56:15 +0000 (0:00:00.355) 0:00:29.742 ********
2026-04-09 00:57:41.597685 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 00:57:41.597694 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 00:57:41.597698 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 00:57:41.597702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 00:57:41.597706 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 00:57:41.597710 | orchestrator |
2026-04-09 00:57:41.597714 | orchestrator | TASK [generate keys] ***********************************************************
2026-04-09 00:57:41.597721 | orchestrator | Thursday 09 April 2026 00:56:53 +0000 (0:00:37.980) 0:01:07.723 ********
2026-04-09 00:57:41.597843 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597851 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597854 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597858 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597862 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597866 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597870 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-04-09 00:57:41.597873 | orchestrator |
2026-04-09 00:57:41.597877 | orchestrator | TASK [get keys from monitors] **************************************************
2026-04-09 00:57:41.597881 | orchestrator | Thursday 09 April 2026 00:57:12 +0000 (0:00:19.180) 0:01:26.903 ********
2026-04-09 00:57:41.597885 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597888 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597892 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597896 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597900 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597903 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597907 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 00:57:41.597911 | orchestrator |
2026-04-09 00:57:41.597915 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-04-09 00:57:41.597919 | orchestrator | Thursday 09 April 2026 00:57:23 +0000 (0:00:10.062) 0:01:36.965 ********
2026-04-09 00:57:41.597923 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597927 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:57:41.597930 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 00:57:41.597934 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597938 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:57:41.597947 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 00:57:41.597951 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597955 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:57:41.597959 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 00:57:41.597962 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597966 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:57:41.597970 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 00:57:41.597974 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:57:41.597978 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:57:41.597982 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:57:41.597986 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:57:41.597989 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:57:41.598003 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:57:41.598008 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-09 00:57:41.598012 | orchestrator | 2026-04-09 00:57:41.598061 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:57:41.598066 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-09 00:57:41.598072 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-09 00:57:41.598076 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-09 00:57:41.598081 | orchestrator | 2026-04-09 00:57:41.598087 | orchestrator | 2026-04-09 00:57:41.598093 | orchestrator | 2026-04-09 00:57:41.598099 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:57:41.598107 | orchestrator | Thursday 09 April 2026 00:57:41 +0000 (0:00:18.048) 0:01:55.014 ******** 2026-04-09 00:57:41.598116 | orchestrator | =============================================================================== 2026-04-09 00:57:41.598123 | orchestrator | create openstack pool(s) ----------------------------------------------- 37.98s 2026-04-09 00:57:41.598128 | orchestrator | generate keys ---------------------------------------------------------- 19.18s 2026-04-09 00:57:41.598134 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.05s 
2026-04-09 00:57:41.598140 | orchestrator | get keys from monitors ------------------------------------------------- 10.06s 2026-04-09 00:57:41.598145 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.68s 2026-04-09 00:57:41.598151 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.83s 2026-04-09 00:57:41.598157 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.36s 2026-04-09 00:57:41.598163 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 1.01s 2026-04-09 00:57:41.598168 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.99s 2026-04-09 00:57:41.598173 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.95s 2026-04-09 00:57:41.598179 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.89s 2026-04-09 00:57:41.598185 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.89s 2026-04-09 00:57:41.598191 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s 2026-04-09 00:57:41.598196 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.74s 2026-04-09 00:57:41.598203 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s 2026-04-09 00:57:41.598208 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2026-04-09 00:57:41.598214 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.64s 2026-04-09 00:57:41.598220 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s 2026-04-09 00:57:41.598226 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s 2026-04-09 
00:57:41.598232 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.58s 2026-04-09 00:57:41.598239 | orchestrator | 2026-04-09 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:44.647962 | orchestrator | 2026-04-09 00:57:44 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:44.649672 | orchestrator | 2026-04-09 00:57:44 | INFO  | Task 5f777c28-a255-43e1-90a7-7bd4c5194c72 is in state STARTED 2026-04-09 00:57:44.649766 | orchestrator | 2026-04-09 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:47.694745 | orchestrator | 2026-04-09 00:57:47 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:47.695183 | orchestrator | 2026-04-09 00:57:47 | INFO  | Task 5f777c28-a255-43e1-90a7-7bd4c5194c72 is in state STARTED 2026-04-09 00:57:47.695215 | orchestrator | 2026-04-09 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:50.740448 | orchestrator | 2026-04-09 00:57:50 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:50.741929 | orchestrator | 2026-04-09 00:57:50 | INFO  | Task 5f777c28-a255-43e1-90a7-7bd4c5194c72 is in state STARTED 2026-04-09 00:57:50.743066 | orchestrator | 2026-04-09 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:53.779855 | orchestrator | 2026-04-09 00:57:53 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:53.779935 | orchestrator | 2026-04-09 00:57:53 | INFO  | Task 5f777c28-a255-43e1-90a7-7bd4c5194c72 is in state STARTED 2026-04-09 00:57:53.779944 | orchestrator | 2026-04-09 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:56.838154 | orchestrator | 2026-04-09 00:57:56 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state STARTED 2026-04-09 00:57:56.839308 | orchestrator | 2026-04-09 00:57:56 | INFO  | Task 
5f777c28-a255-43e1-90a7-7bd4c5194c72 is in state STARTED 2026-04-09 00:57:56.839509 | orchestrator | 2026-04-09 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:59.884464 | orchestrator | 2026-04-09 00:57:59 | INFO  | Task eaeed454-cce3-423a-85ee-96df0f0f6575 is in state SUCCESS 2026-04-09 00:57:59.885510 | orchestrator | 2026-04-09 00:57:59.885546 | orchestrator | 2026-04-09 00:57:59.885572 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-09 00:57:59.885577 | orchestrator | 2026-04-09 00:57:59.885581 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-09 00:57:59.885586 | orchestrator | Thursday 09 April 2026 00:54:45 +0000 (0:00:00.116) 0:00:00.116 ******** 2026-04-09 00:57:59.885590 | orchestrator | ok: [localhost] => { 2026-04-09 00:57:59.885616 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-09 00:57:59.885621 | orchestrator | } 2026-04-09 00:57:59.885625 | orchestrator | 2026-04-09 00:57:59.885629 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-09 00:57:59.885633 | orchestrator | Thursday 09 April 2026 00:54:45 +0000 (0:00:00.056) 0:00:00.173 ******** 2026-04-09 00:57:59.885638 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-09 00:57:59.885643 | orchestrator | ...ignoring 2026-04-09 00:57:59.885647 | orchestrator | 2026-04-09 00:57:59.885651 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-09 00:57:59.885655 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:02.941) 0:00:03.114 ******** 2026-04-09 00:57:59.885686 | orchestrator | skipping: [localhost] 2026-04-09 00:57:59.885692 | orchestrator | 2026-04-09 00:57:59.885699 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-09 00:57:59.885746 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:00.051) 0:00:03.166 ******** 2026-04-09 00:57:59.885752 | orchestrator | ok: [localhost] 2026-04-09 00:57:59.885758 | orchestrator | 2026-04-09 00:57:59.885764 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:57:59.885770 | orchestrator | 2026-04-09 00:57:59.885776 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:57:59.885782 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.214) 0:00:03.381 ******** 2026-04-09 00:57:59.885788 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.885794 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.885823 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.885829 | orchestrator | 2026-04-09 00:57:59.885835 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:57:59.885841 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.290) 0:00:03.672 ******** 2026-04-09 00:57:59.885847 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-09 00:57:59.885853 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-04-09 00:57:59.885859 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-09 00:57:59.885865 | orchestrator | 2026-04-09 00:57:59.885872 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-09 00:57:59.885879 | orchestrator | 2026-04-09 00:57:59.885885 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-09 00:57:59.885891 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.435) 0:00:04.108 ******** 2026-04-09 00:57:59.885897 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:57:59.885901 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 00:57:59.885905 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 00:57:59.885909 | orchestrator | 2026-04-09 00:57:59.885913 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:57:59.885917 | orchestrator | Thursday 09 April 2026 00:54:50 +0000 (0:00:00.406) 0:00:04.514 ******** 2026-04-09 00:57:59.885921 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:57:59.885926 | orchestrator | 2026-04-09 00:57:59.885929 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-09 00:57:59.885933 | orchestrator | Thursday 09 April 2026 00:54:50 +0000 (0:00:00.723) 0:00:05.238 ******** 2026-04-09 00:57:59.885962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:57:59.885988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:57:59.886002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:57:59.886007 | orchestrator | 2026-04-09 00:57:59.886047 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-09 00:57:59.886350 | orchestrator | Thursday 09 April 2026 00:54:54 +0000 (0:00:03.701) 0:00:08.940 ******** 2026-04-09 00:57:59.886357 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.886364 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.886371 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886377 | orchestrator | 2026-04-09 00:57:59.886382 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-09 00:57:59.886389 | orchestrator | Thursday 09 April 2026 00:54:55 +0000 (0:00:00.986) 0:00:09.926 ******** 2026-04-09 00:57:59.886394 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
00:57:59.886400 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886415 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.886421 | orchestrator | 2026-04-09 00:57:59.886427 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-09 00:57:59.886433 | orchestrator | Thursday 09 April 2026 00:54:57 +0000 (0:00:01.560) 0:00:11.487 ******** 2026-04-09 00:57:59.886441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:57:59.886479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:57:59.886486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 
00:57:59.886498 | orchestrator | 2026-04-09 00:57:59.886504 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-09 00:57:59.886510 | orchestrator | Thursday 09 April 2026 00:55:01 +0000 (0:00:04.075) 0:00:15.562 ******** 2026-04-09 00:57:59.886516 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.886521 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886527 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.886533 | orchestrator | 2026-04-09 00:57:59.886539 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-09 00:57:59.886545 | orchestrator | Thursday 09 April 2026 00:55:02 +0000 (0:00:01.239) 0:00:16.801 ******** 2026-04-09 00:57:59.886551 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.886557 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:57:59.886563 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:57:59.886569 | orchestrator | 2026-04-09 00:57:59.886589 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:57:59.886594 | orchestrator | Thursday 09 April 2026 00:55:05 +0000 (0:00:03.246) 0:00:20.048 ******** 2026-04-09 00:57:59.886598 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:57:59.886602 | orchestrator | 2026-04-09 00:57:59.886606 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 00:57:59.886610 | orchestrator | Thursday 09 April 2026 00:55:06 +0000 (0:00:00.531) 0:00:20.579 ******** 2026-04-09 00:57:59.886625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886634 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.886638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886643 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.886654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886662 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886666 | orchestrator | 2026-04-09 00:57:59.886670 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 00:57:59.886674 | orchestrator | Thursday 09 April 2026 00:55:08 +0000 (0:00:02.377) 0:00:22.957 ******** 2026-04-09 00:57:59.886678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886682 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.886691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886699 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.886725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886733 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886738 | orchestrator | 2026-04-09 00:57:59.886741 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-09 00:57:59.886745 | orchestrator | Thursday 09 April 2026 00:55:10 +0000 (0:00:02.397) 0:00:25.354 ******** 2026-04-09 00:57:59.886756 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886764 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.886772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886777 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886791 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.886795 | orchestrator | 2026-04-09 00:57:59.886799 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-09 00:57:59.886805 | orchestrator | Thursday 09 April 2026 00:55:13 +0000 
(0:00:02.225) 0:00:27.580 ******** 2026-04-09 00:57:59.886814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:57:59.886818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:57:59.886832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:57:59.886837 | orchestrator | 2026-04-09 00:57:59.886841 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-09 00:57:59.886844 | orchestrator | Thursday 09 April 2026 00:55:16 +0000 (0:00:03.029) 0:00:30.610 ******** 2026-04-09 00:57:59.886848 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:57:59.886852 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:57:59.886856 | orchestrator | } 2026-04-09 00:57:59.886860 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:57:59.886864 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:57:59.886867 | orchestrator | } 2026-04-09 00:57:59.886871 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:57:59.886875 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:57:59.886879 | orchestrator | } 2026-04-09 00:57:59.886883 | orchestrator | 2026-04-09 00:57:59.886886 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:57:59.886890 | orchestrator | Thursday 09 April 2026 00:55:16 +0000 (0:00:00.436) 0:00:31.047 ******** 2026-04-09 00:57:59.886894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886901 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.886911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886915 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.886927 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.886931 | orchestrator | 2026-04-09 00:57:59.886935 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-09 00:57:59.886938 | orchestrator | Thursday 09 April 2026 00:55:19 +0000 (0:00:02.625) 0:00:33.672 ******** 2026-04-09 00:57:59.886942 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.886946 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.886950 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886954 | orchestrator | 2026-04-09 00:57:59.886957 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-09 00:57:59.886961 | orchestrator | Thursday 09 April 2026 00:55:20 +0000 (0:00:00.984) 0:00:34.656 ******** 2026-04-09 00:57:59.886965 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.886969 | orchestrator | 2026-04-09 00:57:59.886974 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-09 00:57:59.886979 | orchestrator | Thursday 09 April 2026 00:55:20 +0000 (0:00:00.174) 0:00:34.831 ******** 2026-04-09 00:57:59.886985 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.886990 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.886994 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.886998 | orchestrator | 2026-04-09 00:57:59.887003 | orchestrator | TASK 
[mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-09 00:57:59.887007 | orchestrator | Thursday 09 April 2026 00:55:20 +0000 (0:00:00.474) 0:00:35.306 ******** 2026-04-09 00:57:59.887014 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887018 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887023 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887028 | orchestrator | 2026-04-09 00:57:59.887032 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-09 00:57:59.887046 | orchestrator | Thursday 09 April 2026 00:55:21 +0000 (0:00:00.362) 0:00:35.669 ******** 2026-04-09 00:57:59.887050 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887055 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887059 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887070 | orchestrator | 2026-04-09 00:57:59.887074 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-09 00:57:59.887079 | orchestrator | Thursday 09 April 2026 00:55:21 +0000 (0:00:00.470) 0:00:36.140 ******** 2026-04-09 00:57:59.887083 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887087 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887092 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887096 | orchestrator | 2026-04-09 00:57:59.887101 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-09 00:57:59.887105 | orchestrator | Thursday 09 April 2026 00:55:22 +0000 (0:00:00.298) 0:00:36.439 ******** 2026-04-09 00:57:59.887109 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887114 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887118 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887123 | orchestrator | 2026-04-09 00:57:59.887128 | orchestrator | TASK 
[mariadb : Registering MariaDB seqno variable] **************************** 2026-04-09 00:57:59.887132 | orchestrator | Thursday 09 April 2026 00:55:22 +0000 (0:00:00.299) 0:00:36.739 ******** 2026-04-09 00:57:59.887139 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887144 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887148 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887153 | orchestrator | 2026-04-09 00:57:59.887157 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-09 00:57:59.887162 | orchestrator | Thursday 09 April 2026 00:55:22 +0000 (0:00:00.288) 0:00:37.028 ******** 2026-04-09 00:57:59.887166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:57:59.887171 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:57:59.887176 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:57:59.887180 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887185 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-09 00:57:59.887189 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-09 00:57:59.887193 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-09 00:57:59.887198 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887202 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-09 00:57:59.887207 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-09 00:57:59.887211 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-09 00:57:59.887218 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887224 | orchestrator | 2026-04-09 00:57:59.887233 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-09 00:57:59.887241 | orchestrator | 
Thursday 09 April 2026 00:55:23 +0000 (0:00:00.409) 0:00:37.437 ******** 2026-04-09 00:57:59.887247 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887253 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887259 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887266 | orchestrator | 2026-04-09 00:57:59.887272 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-09 00:57:59.887279 | orchestrator | Thursday 09 April 2026 00:55:23 +0000 (0:00:00.522) 0:00:37.960 ******** 2026-04-09 00:57:59.887285 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887291 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887298 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887305 | orchestrator | 2026-04-09 00:57:59.887311 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-09 00:57:59.887317 | orchestrator | Thursday 09 April 2026 00:55:23 +0000 (0:00:00.323) 0:00:38.284 ******** 2026-04-09 00:57:59.887325 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887330 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887334 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887339 | orchestrator | 2026-04-09 00:57:59.887343 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-09 00:57:59.887348 | orchestrator | Thursday 09 April 2026 00:55:24 +0000 (0:00:00.398) 0:00:38.683 ******** 2026-04-09 00:57:59.887353 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887359 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887365 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887372 | orchestrator | 2026-04-09 00:57:59.887378 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-09 00:57:59.887385 | orchestrator | 
Thursday 09 April 2026 00:55:24 +0000 (0:00:00.291) 0:00:38.974 ******** 2026-04-09 00:57:59.887391 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887398 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887404 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887410 | orchestrator | 2026-04-09 00:57:59.887414 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-09 00:57:59.887418 | orchestrator | Thursday 09 April 2026 00:55:25 +0000 (0:00:00.545) 0:00:39.520 ******** 2026-04-09 00:57:59.887423 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887434 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887444 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887450 | orchestrator | 2026-04-09 00:57:59.887455 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-09 00:57:59.887465 | orchestrator | Thursday 09 April 2026 00:55:25 +0000 (0:00:00.367) 0:00:39.887 ******** 2026-04-09 00:57:59.887472 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887477 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887484 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887489 | orchestrator | 2026-04-09 00:57:59.887496 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-09 00:57:59.887507 | orchestrator | Thursday 09 April 2026 00:55:25 +0000 (0:00:00.321) 0:00:40.209 ******** 2026-04-09 00:57:59.887513 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887519 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887525 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887531 | orchestrator | 2026-04-09 00:57:59.887547 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-09 00:57:59.887554 | orchestrator | 
Thursday 09 April 2026 00:55:26 +0000 (0:00:00.320) 0:00:40.530 ******** 2026-04-09 00:57:59.887572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.887581 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 00:57:59.887591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.887603 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887615 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.887621 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887627 | orchestrator | 2026-04-09 00:57:59.887633 | orchestrator | TASK [mariadb : Wait for slave MariaDB] 
**************************************** 2026-04-09 00:57:59.887639 | orchestrator | Thursday 09 April 2026 00:55:28 +0000 (0:00:02.523) 0:00:43.053 ******** 2026-04-09 00:57:59.887645 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887651 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887657 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887663 | orchestrator | 2026-04-09 00:57:59.887668 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-09 00:57:59.887674 | orchestrator | Thursday 09 April 2026 00:55:28 +0000 (0:00:00.306) 0:00:43.360 ******** 2026-04-09 00:57:59.887687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.887751 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.887767 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:57:59.887785 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887792 | orchestrator | 2026-04-09 00:57:59.887798 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-09 00:57:59.887808 | orchestrator | Thursday 09 April 2026 00:55:30 +0000 (0:00:01.975) 0:00:45.335 ******** 2026-04-09 00:57:59.887815 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887821 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887827 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887833 | orchestrator | 2026-04-09 00:57:59.887839 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-09 00:57:59.887846 | orchestrator | Thursday 09 April 2026 00:55:31 +0000 (0:00:00.307) 0:00:45.642 ******** 2026-04-09 00:57:59.887850 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887853 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887857 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887861 | orchestrator | 2026-04-09 00:57:59.887865 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-09 00:57:59.887869 | orchestrator | Thursday 09 April 2026 00:55:31 +0000 (0:00:00.484) 0:00:46.127 ******** 2026-04-09 00:57:59.887873 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887877 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887880 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887884 | orchestrator | 2026-04-09 
00:57:59.887888 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-09 00:57:59.887892 | orchestrator | Thursday 09 April 2026 00:55:32 +0000 (0:00:00.292) 0:00:46.419 ******** 2026-04-09 00:57:59.887895 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887899 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887903 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887907 | orchestrator | 2026-04-09 00:57:59.887911 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-09 00:57:59.887914 | orchestrator | Thursday 09 April 2026 00:55:32 +0000 (0:00:00.483) 0:00:46.903 ******** 2026-04-09 00:57:59.887918 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.887922 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.887926 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.887929 | orchestrator | 2026-04-09 00:57:59.887933 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-09 00:57:59.887937 | orchestrator | Thursday 09 April 2026 00:55:32 +0000 (0:00:00.465) 0:00:47.368 ******** 2026-04-09 00:57:59.887941 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:57:59.887944 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.887948 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:57:59.887952 | orchestrator | 2026-04-09 00:57:59.887956 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-09 00:57:59.887960 | orchestrator | Thursday 09 April 2026 00:55:33 +0000 (0:00:00.908) 0:00:48.276 ******** 2026-04-09 00:57:59.887971 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.887975 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.887979 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.887982 | orchestrator | 2026-04-09 00:57:59.887986 | 
orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-09 00:57:59.887990 | orchestrator | Thursday 09 April 2026 00:55:34 +0000 (0:00:00.301) 0:00:48.578 ******** 2026-04-09 00:57:59.887994 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.887998 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.888001 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.888005 | orchestrator | 2026-04-09 00:57:59.888009 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-09 00:57:59.888013 | orchestrator | Thursday 09 April 2026 00:55:34 +0000 (0:00:00.302) 0:00:48.880 ******** 2026-04-09 00:57:59.888018 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-09 00:57:59.888022 | orchestrator | ...ignoring 2026-04-09 00:57:59.888026 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-09 00:57:59.888030 | orchestrator | ...ignoring 2026-04-09 00:57:59.888034 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-09 00:57:59.888038 | orchestrator | ...ignoring 2026-04-09 00:57:59.888041 | orchestrator | 2026-04-09 00:57:59.888045 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-09 00:57:59.888049 | orchestrator | Thursday 09 April 2026 00:55:45 +0000 (0:00:10.684) 0:00:59.564 ******** 2026-04-09 00:57:59.888053 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888057 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.888060 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.888064 | orchestrator | 2026-04-09 00:57:59.888068 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-09 00:57:59.888072 | orchestrator | Thursday 09 April 2026 00:55:45 +0000 (0:00:00.469) 0:01:00.034 ******** 2026-04-09 00:57:59.888076 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888079 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888083 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888087 | orchestrator | 2026-04-09 00:57:59.888091 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-09 00:57:59.888094 | orchestrator | Thursday 09 April 2026 00:55:45 +0000 (0:00:00.304) 0:01:00.338 ******** 2026-04-09 00:57:59.888098 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888102 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888106 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888109 | orchestrator | 2026-04-09 00:57:59.888113 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-09 00:57:59.888117 | orchestrator | Thursday 09 April 2026 00:55:46 +0000 (0:00:00.308) 0:01:00.647 ******** 2026-04-09 00:57:59.888121 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:57:59.888124 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888128 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888132 | orchestrator | 2026-04-09 00:57:59.888136 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-09 00:57:59.888140 | orchestrator | Thursday 09 April 2026 00:55:46 +0000 (0:00:00.276) 0:01:00.924 ******** 2026-04-09 00:57:59.888143 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888150 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.888154 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.888158 | orchestrator | 2026-04-09 00:57:59.888161 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-09 00:57:59.888165 | orchestrator | Thursday 09 April 2026 00:55:47 +0000 (0:00:00.473) 0:01:01.398 ******** 2026-04-09 00:57:59.888174 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888180 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888184 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888188 | orchestrator | 2026-04-09 00:57:59.888192 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:57:59.888195 | orchestrator | Thursday 09 April 2026 00:55:47 +0000 (0:00:00.311) 0:01:01.710 ******** 2026-04-09 00:57:59.888199 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888203 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888207 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-09 00:57:59.888211 | orchestrator | 2026-04-09 00:57:59.888215 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-09 00:57:59.888218 | orchestrator | Thursday 09 April 2026 00:55:47 +0000 (0:00:00.366) 0:01:02.076 ******** 2026-04-09 
00:57:59.888222 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.888226 | orchestrator | 2026-04-09 00:57:59.888230 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-09 00:57:59.888233 | orchestrator | Thursday 09 April 2026 00:55:57 +0000 (0:00:09.547) 0:01:11.624 ******** 2026-04-09 00:57:59.888237 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888241 | orchestrator | 2026-04-09 00:57:59.888245 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:57:59.888248 | orchestrator | Thursday 09 April 2026 00:55:57 +0000 (0:00:00.122) 0:01:11.746 ******** 2026-04-09 00:57:59.888252 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888256 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888260 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888264 | orchestrator | 2026-04-09 00:57:59.888267 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-09 00:57:59.888271 | orchestrator | Thursday 09 April 2026 00:55:58 +0000 (0:00:00.824) 0:01:12.571 ******** 2026-04-09 00:57:59.888275 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.888279 | orchestrator | 2026-04-09 00:57:59.888282 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-09 00:57:59.888286 | orchestrator | Thursday 09 April 2026 00:56:06 +0000 (0:00:08.310) 0:01:20.881 ******** 2026-04-09 00:57:59.888290 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888294 | orchestrator | 2026-04-09 00:57:59.888298 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-04-09 00:57:59.888301 | orchestrator | Thursday 09 April 2026 00:56:08 +0000 (0:00:01.509) 0:01:22.391 ******** 2026-04-09 00:57:59.888305 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888309 | 
orchestrator | 2026-04-09 00:57:59.888313 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-09 00:57:59.888316 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:00:02.157) 0:01:24.548 ******** 2026-04-09 00:57:59.888320 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.888324 | orchestrator | 2026-04-09 00:57:59.888328 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-09 00:57:59.888331 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:00:00.123) 0:01:24.672 ******** 2026-04-09 00:57:59.888335 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888339 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888343 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888347 | orchestrator | 2026-04-09 00:57:59.888350 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-09 00:57:59.888354 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:00:00.526) 0:01:25.198 ******** 2026-04-09 00:57:59.888358 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888362 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:57:59.888366 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:57:59.888369 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-09 00:57:59.888378 | orchestrator | 2026-04-09 00:57:59.888381 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-09 00:57:59.888385 | orchestrator | skipping: no hosts matched 2026-04-09 00:57:59.888389 | orchestrator | 2026-04-09 00:57:59.888393 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-09 00:57:59.888396 | orchestrator | 2026-04-09 00:57:59.888400 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-04-09 00:57:59.888404 | orchestrator | Thursday 09 April 2026 00:56:11 +0000 (0:00:00.309) 0:01:25.508 ******** 2026-04-09 00:57:59.888408 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:57:59.888412 | orchestrator | 2026-04-09 00:57:59.888415 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-09 00:57:59.888419 | orchestrator | Thursday 09 April 2026 00:56:27 +0000 (0:00:16.806) 0:01:42.315 ******** 2026-04-09 00:57:59.888423 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.888427 | orchestrator | 2026-04-09 00:57:59.888431 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-09 00:57:59.888436 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:15.590) 0:01:57.905 ******** 2026-04-09 00:57:59.888441 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.888448 | orchestrator | 2026-04-09 00:57:59.888453 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-09 00:57:59.888457 | orchestrator | 2026-04-09 00:57:59.888460 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-09 00:57:59.888464 | orchestrator | Thursday 09 April 2026 00:56:45 +0000 (0:00:02.330) 0:02:00.236 ******** 2026-04-09 00:57:59.888468 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:57:59.888472 | orchestrator | 2026-04-09 00:57:59.888475 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-09 00:57:59.888479 | orchestrator | Thursday 09 April 2026 00:57:02 +0000 (0:00:16.459) 0:02:16.695 ******** 2026-04-09 00:57:59.888483 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.888487 | orchestrator | 2026-04-09 00:57:59.888493 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-09 00:57:59.888497 
| orchestrator | Thursday 09 April 2026 00:57:17 +0000 (0:00:15.652) 0:02:32.348 ******** 2026-04-09 00:57:59.888501 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.888505 | orchestrator | 2026-04-09 00:57:59.888508 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-09 00:57:59.888512 | orchestrator | 2026-04-09 00:57:59.888519 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-09 00:57:59.888523 | orchestrator | Thursday 09 April 2026 00:57:20 +0000 (0:00:02.728) 0:02:35.076 ******** 2026-04-09 00:57:59.888526 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.888530 | orchestrator | 2026-04-09 00:57:59.888534 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-09 00:57:59.888538 | orchestrator | Thursday 09 April 2026 00:57:31 +0000 (0:00:10.446) 0:02:45.523 ******** 2026-04-09 00:57:59.888542 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888545 | orchestrator | 2026-04-09 00:57:59.888549 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-09 00:57:59.888553 | orchestrator | Thursday 09 April 2026 00:57:35 +0000 (0:00:04.594) 0:02:50.117 ******** 2026-04-09 00:57:59.888557 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888560 | orchestrator | 2026-04-09 00:57:59.888564 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-09 00:57:59.888568 | orchestrator | 2026-04-09 00:57:59.888572 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-09 00:57:59.888576 | orchestrator | Thursday 09 April 2026 00:57:38 +0000 (0:00:02.404) 0:02:52.522 ******** 2026-04-09 00:57:59.888579 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:57:59.888583 | orchestrator | 
2026-04-09 00:57:59.888587 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-09 00:57:59.888591 | orchestrator | Thursday 09 April 2026 00:57:38 +0000 (0:00:00.512) 0:02:53.035 ******** 2026-04-09 00:57:59.888598 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888602 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888606 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.888610 | orchestrator | 2026-04-09 00:57:59.888614 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-09 00:57:59.888617 | orchestrator | Thursday 09 April 2026 00:57:41 +0000 (0:00:02.592) 0:02:55.627 ******** 2026-04-09 00:57:59.888621 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888625 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888629 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.888633 | orchestrator | 2026-04-09 00:57:59.888636 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-09 00:57:59.888640 | orchestrator | Thursday 09 April 2026 00:57:43 +0000 (0:00:02.464) 0:02:58.092 ******** 2026-04-09 00:57:59.888644 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888648 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888652 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.888655 | orchestrator | 2026-04-09 00:57:59.888659 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-09 00:57:59.888663 | orchestrator | Thursday 09 April 2026 00:57:46 +0000 (0:00:02.429) 0:03:00.522 ******** 2026-04-09 00:57:59.888667 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888671 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888674 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:57:59.888678 | orchestrator | 
2026-04-09 00:57:59.888682 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-09 00:57:59.888686 | orchestrator | Thursday 09 April 2026 00:57:48 +0000 (0:00:02.442) 0:03:02.964 ******** 2026-04-09 00:57:59.888690 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.888693 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888697 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.888793 | orchestrator | 2026-04-09 00:57:59.888810 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-09 00:57:59.888814 | orchestrator | Thursday 09 April 2026 00:57:53 +0000 (0:00:04.715) 0:03:07.680 ******** 2026-04-09 00:57:59.888818 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888822 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888826 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888829 | orchestrator | 2026-04-09 00:57:59.888833 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-09 00:57:59.888837 | orchestrator | Thursday 09 April 2026 00:57:55 +0000 (0:00:01.911) 0:03:09.592 ******** 2026-04-09 00:57:59.888841 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888844 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888848 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888852 | orchestrator | 2026-04-09 00:57:59.888856 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-09 00:57:59.888860 | orchestrator | Thursday 09 April 2026 00:57:55 +0000 (0:00:00.493) 0:03:10.086 ******** 2026-04-09 00:57:59.888863 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:57:59.888867 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:57:59.888871 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:57:59.888875 | orchestrator | 2026-04-09 00:57:59.888878 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-09 00:57:59.888882 | orchestrator | Thursday 09 April 2026 00:57:58 +0000 (0:00:03.125) 0:03:13.211 ******** 2026-04-09 00:57:59.888886 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:57:59.888890 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:57:59.888893 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:57:59.888897 | orchestrator | 2026-04-09 00:57:59.888901 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:57:59.888905 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-09 00:57:59.888914 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-04-09 00:57:59.888924 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-04-09 00:57:59.888928 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-04-09 00:57:59.888931 | orchestrator | 2026-04-09 00:57:59.888935 | orchestrator | 2026-04-09 00:57:59.888945 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:57:59.888949 | orchestrator | Thursday 09 April 2026 00:57:59 +0000 (0:00:00.217) 0:03:13.429 ******** 2026-04-09 00:57:59.888952 | orchestrator | =============================================================================== 2026-04-09 00:57:59.888956 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.27s 2026-04-09 00:57:59.888960 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.24s 2026-04-09 00:57:59.888964 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.68s 2026-04-09 00:57:59.888968 | orchestrator | 
mariadb : Restart MariaDB container ------------------------------------ 10.45s 2026-04-09 00:57:59.888971 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.55s 2026-04-09 00:57:59.888975 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.31s 2026-04-09 00:57:59.888979 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.06s 2026-04-09 00:57:59.888982 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.72s 2026-04-09 00:57:59.888986 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2026-04-09 00:57:59.888990 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.08s 2026-04-09 00:57:59.888994 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.70s 2026-04-09 00:57:59.888997 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.25s 2026-04-09 00:57:59.889001 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.13s 2026-04-09 00:57:59.889005 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.03s 2026-04-09 00:57:59.889009 | orchestrator | Check MariaDB service --------------------------------------------------- 2.94s 2026-04-09 00:57:59.889012 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.63s 2026-04-09 00:57:59.889016 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.59s 2026-04-09 00:57:59.889028 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.52s 2026-04-09 00:57:59.889032 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.46s 2026-04-09 00:57:59.889036 | orchestrator | mariadb : 
Granting permissions on Mariabackup database to backup user --- 2.44s 2026-04-09 00:57:59.889039 | orchestrator | 2026-04-09 00:57:59 | INFO  | Task 5f777c28-a255-43e1-90a7-7bd4c5194c72 is in state STARTED 2026-04-09 00:57:59.889049 | orchestrator | 2026-04-09 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:02.939652 | orchestrator | 2026-04-09 00:58:02 | INFO  | Task f5b98e2f-8573-4c76-8ca2-76923eb4bf9e is in state STARTED 2026-04-09 00:58:02.942401 | orchestrator | 2026-04-09 00:58:02 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:58:02.944510 | orchestrator | 2026-04-09 00:58:02 | INFO  | Task 5f777c28-a255-43e1-90a7-7bd4c5194c72 is in state STARTED 2026-04-09 00:58:02.944571 | orchestrator | 2026-04-09 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:21.173088 | orchestrator | 2026-04-09 00:58:21 | INFO  | Task f5b98e2f-8573-4c76-8ca2-76923eb4bf9e is in state STARTED 2026-04-09 00:58:21.174571 | orchestrator | 2026-04-09 00:58:21 | INFO  | Task e3e1b691-6a46-4ae9-878c-afcdda7151a9 is in state STARTED 2026-04-09 00:58:21.175662 | orchestrator | 2026-04-09 00:58:21 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:58:21.176645 | orchestrator | 2026-04-09 00:58:21 | INFO  | Task 5f777c28-a255-43e1-90a7-7bd4c5194c72 is in state SUCCESS 2026-04-09 00:58:21.176764 | orchestrator | 2026-04-09 00:58:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:09.952371 | orchestrator | 2026-04-09 00:59:09 | INFO  | Task
f5b98e2f-8573-4c76-8ca2-76923eb4bf9e is in state STARTED 2026-04-09 00:59:09.953416 | orchestrator | 2026-04-09 00:59:09 | INFO  | Task e3e1b691-6a46-4ae9-878c-afcdda7151a9 is in state STARTED 2026-04-09 00:59:09.955734 | orchestrator | 2026-04-09 00:59:09 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:59:09.956160 | orchestrator | 2026-04-09 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:12.992953 | orchestrator | 2026-04-09 00:59:12 | INFO  | Task f5b98e2f-8573-4c76-8ca2-76923eb4bf9e is in state STARTED 2026-04-09 00:59:12.996559 | orchestrator | 2026-04-09 00:59:12.996636 | orchestrator | 2026-04-09 00:59:12.996643 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-09 00:59:12.996649 | orchestrator | 2026-04-09 00:59:12.996653 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-09 00:59:12.996658 | orchestrator | Thursday 09 April 2026 00:57:44 +0000 (0:00:00.225) 0:00:00.225 ******** 2026-04-09 00:59:12.996662 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-09 00:59:12.996668 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996672 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996676 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 00:59:12.996680 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996684 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-09 00:59:12.996688 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.glance.keyring) 2026-04-09 00:59:12.996691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-09 00:59:12.996695 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-09 00:59:12.996699 | orchestrator | 2026-04-09 00:59:12.996703 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-09 00:59:12.996707 | orchestrator | Thursday 09 April 2026 00:57:49 +0000 (0:00:04.867) 0:00:05.092 ******** 2026-04-09 00:59:12.996710 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-09 00:59:12.996714 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996718 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996722 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 00:59:12.996725 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996729 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-09 00:59:12.996733 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-09 00:59:12.996736 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-09 00:59:12.996740 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-09 00:59:12.996744 | orchestrator | 2026-04-09 00:59:12.996748 | orchestrator | TASK [Create share directory] ************************************************** 
2026-04-09 00:59:12.996751 | orchestrator | Thursday 09 April 2026 00:57:53 +0000 (0:00:04.420) 0:00:09.512 ******** 2026-04-09 00:59:12.996756 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-09 00:59:12.996761 | orchestrator | 2026-04-09 00:59:12.996764 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-09 00:59:12.996784 | orchestrator | Thursday 09 April 2026 00:57:54 +0000 (0:00:00.976) 0:00:10.488 ******** 2026-04-09 00:59:12.996794 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-09 00:59:12.996799 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996808 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996812 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 00:59:12.996816 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996820 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-09 00:59:12.996824 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-09 00:59:12.996836 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-09 00:59:12.996840 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-09 00:59:12.996844 | orchestrator | 2026-04-09 00:59:12.996848 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-09 00:59:12.996851 | orchestrator | Thursday 09 April 2026 00:58:08 +0000 (0:00:13.355) 0:00:23.844 ******** 2026-04-09 00:59:12.996855 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 
2026-04-09 00:59:12.996859 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-09 00:59:12.996863 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-09 00:59:12.996867 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-09 00:59:12.996880 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-09 00:59:12.996884 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-09 00:59:12.996888 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-09 00:59:12.996892 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-09 00:59:12.996895 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-09 00:59:12.996899 | orchestrator | 2026-04-09 00:59:12.996903 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-09 00:59:12.996907 | orchestrator | Thursday 09 April 2026 00:58:11 +0000 (0:00:03.202) 0:00:27.047 ******** 2026-04-09 00:59:12.996911 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-09 00:59:12.996915 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996919 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996923 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 00:59:12.996926 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:12.996930 | 
orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-09 00:59:12.996934 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-09 00:59:12.996938 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-09 00:59:12.996942 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-09 00:59:12.996946 | orchestrator | 2026-04-09 00:59:12.996949 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:59:12.996953 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:59:12.996962 | orchestrator | 2026-04-09 00:59:12.996966 | orchestrator | 2026-04-09 00:59:12.996970 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:59:12.996974 | orchestrator | Thursday 09 April 2026 00:58:18 +0000 (0:00:06.824) 0:00:33.871 ******** 2026-04-09 00:59:12.996977 | orchestrator | =============================================================================== 2026-04-09 00:59:12.996981 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.36s 2026-04-09 00:59:12.996985 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.82s 2026-04-09 00:59:12.996989 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.87s 2026-04-09 00:59:12.996993 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.42s 2026-04-09 00:59:12.996996 | orchestrator | Check if target directories exist --------------------------------------- 3.20s 2026-04-09 00:59:12.997000 | orchestrator | Create share directory -------------------------------------------------- 0.98s 2026-04-09 00:59:12.997004 | orchestrator | 2026-04-09 00:59:12.997008 | orchestrator | 2026-04-09 
00:59:12.997012 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-09 00:59:12.997016 | orchestrator | 2026-04-09 00:59:12.997020 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-09 00:59:12.997023 | orchestrator | Thursday 09 April 2026 00:58:21 +0000 (0:00:00.268) 0:00:00.268 ******** 2026-04-09 00:59:12.997027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-09 00:59:12.997032 | orchestrator | 2026-04-09 00:59:12.997036 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-09 00:59:12.997040 | orchestrator | Thursday 09 April 2026 00:58:21 +0000 (0:00:00.204) 0:00:00.473 ******** 2026-04-09 00:59:12.997044 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-09 00:59:12.997048 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-09 00:59:12.997052 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-09 00:59:12.997056 | orchestrator | 2026-04-09 00:59:12.997059 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-09 00:59:12.997063 | orchestrator | Thursday 09 April 2026 00:58:23 +0000 (0:00:01.340) 0:00:01.813 ******** 2026-04-09 00:59:12.997067 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-09 00:59:12.997071 | orchestrator | 2026-04-09 00:59:12.997078 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-09 00:59:12.997082 | orchestrator | Thursday 09 April 2026 00:58:24 +0000 (0:00:00.949) 0:00:02.763 ******** 2026-04-09 00:59:12.997085 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:12.997090 | 
orchestrator | 2026-04-09 00:59:12.997093 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-09 00:59:12.997097 | orchestrator | Thursday 09 April 2026 00:58:24 +0000 (0:00:00.752) 0:00:03.515 ******** 2026-04-09 00:59:12.997101 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:12.997105 | orchestrator | 2026-04-09 00:59:12.997109 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-09 00:59:12.997115 | orchestrator | Thursday 09 April 2026 00:58:25 +0000 (0:00:00.825) 0:00:04.341 ******** 2026-04-09 00:59:12.997122 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-09 00:59:12.997129 | orchestrator | ok: [testbed-manager] 2026-04-09 00:59:12.997135 | orchestrator | 2026-04-09 00:59:12.997142 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-09 00:59:12.997152 | orchestrator | Thursday 09 April 2026 00:59:01 +0000 (0:00:36.038) 0:00:40.380 ******** 2026-04-09 00:59:12.997158 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-09 00:59:12.997166 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-09 00:59:12.997177 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-09 00:59:12.997183 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-09 00:59:12.997191 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-09 00:59:12.997197 | orchestrator | 2026-04-09 00:59:12.997204 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-09 00:59:12.997211 | orchestrator | Thursday 09 April 2026 00:59:06 +0000 (0:00:04.512) 0:00:44.892 ******** 2026-04-09 00:59:12.997217 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-09 00:59:12.997223 | orchestrator | 2026-04-09 00:59:12.997230 | orchestrator 
| TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-09 00:59:12.997237 | orchestrator | Thursday 09 April 2026 00:59:06 +0000 (0:00:00.494) 0:00:45.386 ******** 2026-04-09 00:59:12.997243 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:59:12.997249 | orchestrator | 2026-04-09 00:59:12.997256 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-09 00:59:12.997263 | orchestrator | Thursday 09 April 2026 00:59:06 +0000 (0:00:00.122) 0:00:45.509 ******** 2026-04-09 00:59:12.997270 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:59:12.997278 | orchestrator | 2026-04-09 00:59:12.997283 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-09 00:59:12.997288 | orchestrator | Thursday 09 April 2026 00:59:07 +0000 (0:00:00.282) 0:00:45.792 ******** 2026-04-09 00:59:12.997293 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:12.997297 | orchestrator | 2026-04-09 00:59:12.997302 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-09 00:59:12.997306 | orchestrator | Thursday 09 April 2026 00:59:08 +0000 (0:00:01.280) 0:00:47.073 ******** 2026-04-09 00:59:12.997310 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:12.997315 | orchestrator | 2026-04-09 00:59:12.997319 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-09 00:59:12.997324 | orchestrator | Thursday 09 April 2026 00:59:09 +0000 (0:00:00.619) 0:00:47.693 ******** 2026-04-09 00:59:12.997328 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:12.997333 | orchestrator | 2026-04-09 00:59:12.997337 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-09 00:59:12.997341 | orchestrator | Thursday 09 April 2026 00:59:09 +0000 (0:00:00.503) 0:00:48.196 ******** 
2026-04-09 00:59:12.997346 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-09 00:59:12.997350 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-09 00:59:12.997355 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-09 00:59:12.997359 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-09 00:59:12.997364 | orchestrator | 2026-04-09 00:59:12.997368 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:59:12.997373 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:59:12.997378 | orchestrator | 2026-04-09 00:59:12.997382 | orchestrator | 2026-04-09 00:59:12.997386 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:59:12.997391 | orchestrator | Thursday 09 April 2026 00:59:10 +0000 (0:00:01.312) 0:00:49.509 ******** 2026-04-09 00:59:12.997395 | orchestrator | =============================================================================== 2026-04-09 00:59:12.997399 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.04s 2026-04-09 00:59:12.997404 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.51s 2026-04-09 00:59:12.997408 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.34s 2026-04-09 00:59:12.997412 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.31s 2026-04-09 00:59:12.997417 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.28s 2026-04-09 00:59:12.997421 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.95s 2026-04-09 00:59:12.997429 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.83s 2026-04-09 00:59:12.997433 | orchestrator 
| osism.services.cephclient : Copy keyring file --------------------------- 0.75s 2026-04-09 00:59:12.997438 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.62s 2026-04-09 00:59:12.997442 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.50s 2026-04-09 00:59:12.997447 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s 2026-04-09 00:59:12.997455 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s 2026-04-09 00:59:12.997459 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2026-04-09 00:59:12.997463 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2026-04-09 00:59:12.997468 | orchestrator | 2026-04-09 00:59:12 | INFO  | Task e3e1b691-6a46-4ae9-878c-afcdda7151a9 is in state SUCCESS 2026-04-09 00:59:12.997565 | orchestrator | 2026-04-09 00:59:12 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:59:12.999362 | orchestrator | 2026-04-09 00:59:12 | INFO  | Task a17f063e-9ac1-463c-bf29-73da1d92c820 is in state STARTED 2026-04-09 00:59:13.001166 | orchestrator | 2026-04-09 00:59:12 | INFO  | Task 69ad4f87-b957-4a19-ac1e-747ca5046a26 is in state STARTED 2026-04-09 00:59:13.003604 | orchestrator | 2026-04-09 00:59:13 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 00:59:13.003747 | orchestrator | 2026-04-09 00:59:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:16.057576 | orchestrator | 2026-04-09 00:59:16 | INFO  | Task f5b98e2f-8573-4c76-8ca2-76923eb4bf9e is in state STARTED 2026-04-09 00:59:16.057898 | orchestrator | 2026-04-09 00:59:16 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:59:16.058849 | orchestrator | 2026-04-09 00:59:16 | INFO  | Task 
a17f063e-9ac1-463c-bf29-73da1d92c820 is in state STARTED 2026-04-09 00:59:46.499551 | orchestrator | 2026-04-09 00:59:46 | INFO  | Task 69ad4f87-b957-4a19-ac1e-747ca5046a26 is in state STARTED 2026-04-09 00:59:46.501290 | orchestrator | 2026-04-09 00:59:46 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 00:59:46.501353 | orchestrator | 2026-04-09 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:49.552233 | orchestrator | 2026-04-09 00:59:49 | INFO  | Task f5b98e2f-8573-4c76-8ca2-76923eb4bf9e is in state SUCCESS 2026-04-09 00:59:49.553919 | orchestrator | 2026-04-09 00:59:49.553995 | orchestrator | 2026-04-09 00:59:49.554006 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:59:49.554082 | orchestrator | 2026-04-09 00:59:49.554091 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:59:49.554099 | orchestrator | Thursday 09 April 2026 00:58:02 +0000 (0:00:00.305) 0:00:00.305 ******** 2026-04-09 00:59:49.554106 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.554114 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.554121 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.554128 | orchestrator | 2026-04-09 00:59:49.554134 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:59:49.554398 | orchestrator | Thursday 09 April 2026 00:58:02 +0000 (0:00:00.272) 0:00:00.577 ******** 2026-04-09 00:59:49.554406 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-09 00:59:49.554411 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-09 00:59:49.554415 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-09 00:59:49.554419 | orchestrator | 2026-04-09 00:59:49.554423 | orchestrator | PLAY [Apply role horizon] 
****************************************************** 2026-04-09 00:59:49.554427 | orchestrator | 2026-04-09 00:59:49.554431 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:59:49.554435 | orchestrator | Thursday 09 April 2026 00:58:03 +0000 (0:00:00.307) 0:00:00.885 ******** 2026-04-09 00:59:49.554439 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:59:49.554444 | orchestrator | 2026-04-09 00:59:49.554448 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-09 00:59:49.554452 | orchestrator | Thursday 09 April 2026 00:58:03 +0000 (0:00:00.620) 0:00:01.505 ******** 2026-04-09 00:59:49.554463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.554494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.554508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.554512 | orchestrator | 2026-04-09 00:59:49.554517 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-09 00:59:49.554520 | orchestrator | Thursday 09 April 2026 00:58:05 +0000 (0:00:01.433) 0:00:02.939 ******** 2026-04-09 00:59:49.554524 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.554532 | orchestrator | ok: [testbed-node-1] 2026-04-09 
00:59:49.554536 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.554540 | orchestrator | 2026-04-09 00:59:49.554544 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:59:49.554553 | orchestrator | Thursday 09 April 2026 00:58:05 +0000 (0:00:00.247) 0:00:03.187 ******** 2026-04-09 00:59:49.554557 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:59:49.554579 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 00:59:49.554585 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 00:59:49.554596 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:59:49.554602 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:59:49.554610 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 00:59:49.554618 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:59:49.554624 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:59:49.554630 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:59:49.554649 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 00:59:49.554654 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 00:59:49.554667 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:59:49.554673 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:59:49.554678 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 
'enabled': False})  2026-04-09 00:59:49.554704 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:59:49.554709 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:59:49.554716 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:59:49.554721 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 00:59:49.554739 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 00:59:49.554753 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:59:49.554759 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:59:49.554766 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 00:59:49.554772 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:59:49.554778 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:59:49.554786 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-09 00:59:49.554795 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-09 00:59:49.554802 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-09 00:59:49.554807 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': 
True}) 2026-04-09 00:59:49.554811 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-09 00:59:49.554821 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-09 00:59:49.554825 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-09 00:59:49.554828 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-09 00:59:49.554836 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-09 00:59:49.554841 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-09 00:59:49.554845 | orchestrator | 2026-04-09 00:59:49.554849 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:59:49.554853 | orchestrator | Thursday 09 April 2026 00:58:06 +0000 (0:00:00.711) 0:00:03.898 ******** 2026-04-09 00:59:49.554857 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.554861 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.554865 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.554869 | orchestrator | 2026-04-09 00:59:49.554877 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.554881 | orchestrator | Thursday 09 April 2026 00:58:06 +0000 (0:00:00.244) 0:00:04.143 ******** 2026-04-09 00:59:49.554885 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 00:59:49.554890 | orchestrator | 2026-04-09 00:59:49.554894 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.554898 | orchestrator | Thursday 09 April 2026 00:58:06 +0000 (0:00:00.118) 0:00:04.261 ******** 2026-04-09 00:59:49.554902 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.554967 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.554971 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.554975 | orchestrator | 2026-04-09 00:59:49.554979 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:59:49.554983 | orchestrator | Thursday 09 April 2026 00:58:06 +0000 (0:00:00.259) 0:00:04.521 ******** 2026-04-09 00:59:49.554987 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.554991 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.554994 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.554998 | orchestrator | 2026-04-09 00:59:49.555002 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555006 | orchestrator | Thursday 09 April 2026 00:58:07 +0000 (0:00:00.293) 0:00:04.814 ******** 2026-04-09 00:59:49.555010 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555014 | orchestrator | 2026-04-09 00:59:49.555018 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555021 | orchestrator | Thursday 09 April 2026 00:58:07 +0000 (0:00:00.094) 0:00:04.908 ******** 2026-04-09 00:59:49.555025 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555029 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555033 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555036 | orchestrator | 2026-04-09 00:59:49.555040 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2026-04-09 00:59:49.555044 | orchestrator | Thursday 09 April 2026 00:58:07 +0000 (0:00:00.367) 0:00:05.275 ******** 2026-04-09 00:59:49.555048 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.555052 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.555055 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.555059 | orchestrator | 2026-04-09 00:59:49.555063 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555067 | orchestrator | Thursday 09 April 2026 00:58:07 +0000 (0:00:00.293) 0:00:05.569 ******** 2026-04-09 00:59:49.555071 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555079 | orchestrator | 2026-04-09 00:59:49.555083 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555087 | orchestrator | Thursday 09 April 2026 00:58:07 +0000 (0:00:00.131) 0:00:05.701 ******** 2026-04-09 00:59:49.555090 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555094 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555098 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555102 | orchestrator | 2026-04-09 00:59:49.555106 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:59:49.555110 | orchestrator | Thursday 09 April 2026 00:58:08 +0000 (0:00:00.298) 0:00:05.999 ******** 2026-04-09 00:59:49.555114 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.555118 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.555122 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.555126 | orchestrator | 2026-04-09 00:59:49.555130 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555134 | orchestrator | Thursday 09 April 2026 00:58:08 +0000 (0:00:00.276) 0:00:06.276 ******** 2026-04-09 
00:59:49.555138 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555141 | orchestrator | 2026-04-09 00:59:49.555145 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555150 | orchestrator | Thursday 09 April 2026 00:58:08 +0000 (0:00:00.123) 0:00:06.399 ******** 2026-04-09 00:59:49.555156 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555162 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555167 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555173 | orchestrator | 2026-04-09 00:59:49.555179 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:59:49.555185 | orchestrator | Thursday 09 April 2026 00:58:09 +0000 (0:00:00.455) 0:00:06.855 ******** 2026-04-09 00:59:49.555191 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.555197 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.555204 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.555210 | orchestrator | 2026-04-09 00:59:49.555216 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555222 | orchestrator | Thursday 09 April 2026 00:58:09 +0000 (0:00:00.300) 0:00:07.155 ******** 2026-04-09 00:59:49.555228 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555234 | orchestrator | 2026-04-09 00:59:49.555242 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555250 | orchestrator | Thursday 09 April 2026 00:58:09 +0000 (0:00:00.112) 0:00:07.268 ******** 2026-04-09 00:59:49.555258 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555264 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555270 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555277 | orchestrator | 2026-04-09 00:59:49.555283 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2026-04-09 00:59:49.555290 | orchestrator | Thursday 09 April 2026 00:58:09 +0000 (0:00:00.265) 0:00:07.534 ******** 2026-04-09 00:59:49.555297 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.555308 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.555314 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.555320 | orchestrator | 2026-04-09 00:59:49.555326 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555333 | orchestrator | Thursday 09 April 2026 00:58:10 +0000 (0:00:00.482) 0:00:08.016 ******** 2026-04-09 00:59:49.555339 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555346 | orchestrator | 2026-04-09 00:59:49.555352 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555359 | orchestrator | Thursday 09 April 2026 00:58:10 +0000 (0:00:00.150) 0:00:08.167 ******** 2026-04-09 00:59:49.555365 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555372 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555385 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555399 | orchestrator | 2026-04-09 00:59:49.555406 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:59:49.555412 | orchestrator | Thursday 09 April 2026 00:58:10 +0000 (0:00:00.304) 0:00:08.471 ******** 2026-04-09 00:59:49.555418 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.555424 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.555431 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.555435 | orchestrator | 2026-04-09 00:59:49.555439 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555444 | orchestrator | Thursday 09 April 2026 00:58:11 +0000 (0:00:00.302) 0:00:08.774 ******** 
2026-04-09 00:59:49.555450 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555456 | orchestrator | 2026-04-09 00:59:49.555463 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555469 | orchestrator | Thursday 09 April 2026 00:58:11 +0000 (0:00:00.189) 0:00:08.963 ******** 2026-04-09 00:59:49.555475 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555481 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555487 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555493 | orchestrator | 2026-04-09 00:59:49.555500 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:59:49.555506 | orchestrator | Thursday 09 April 2026 00:58:11 +0000 (0:00:00.331) 0:00:09.295 ******** 2026-04-09 00:59:49.555513 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.555519 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.555525 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.555531 | orchestrator | 2026-04-09 00:59:49.555538 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555542 | orchestrator | Thursday 09 April 2026 00:58:12 +0000 (0:00:00.532) 0:00:09.828 ******** 2026-04-09 00:59:49.555546 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555550 | orchestrator | 2026-04-09 00:59:49.555553 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555558 | orchestrator | Thursday 09 April 2026 00:58:12 +0000 (0:00:00.112) 0:00:09.940 ******** 2026-04-09 00:59:49.555614 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555620 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555624 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555629 | orchestrator | 2026-04-09 00:59:49.555634 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2026-04-09 00:59:49.555639 | orchestrator | Thursday 09 April 2026 00:58:12 +0000 (0:00:00.272) 0:00:10.213 ******** 2026-04-09 00:59:49.555643 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.555647 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.555652 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.555657 | orchestrator | 2026-04-09 00:59:49.555661 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555666 | orchestrator | Thursday 09 April 2026 00:58:12 +0000 (0:00:00.291) 0:00:10.505 ******** 2026-04-09 00:59:49.555670 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555675 | orchestrator | 2026-04-09 00:59:49.555680 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555684 | orchestrator | Thursday 09 April 2026 00:58:12 +0000 (0:00:00.168) 0:00:10.673 ******** 2026-04-09 00:59:49.555689 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555694 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555699 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555703 | orchestrator | 2026-04-09 00:59:49.555708 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:59:49.555712 | orchestrator | Thursday 09 April 2026 00:58:13 +0000 (0:00:00.477) 0:00:11.150 ******** 2026-04-09 00:59:49.555716 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:49.555721 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:49.555725 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:49.555729 | orchestrator | 2026-04-09 00:59:49.555733 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:59:49.555743 | orchestrator | Thursday 09 April 2026 00:58:13 +0000 (0:00:00.286) 
0:00:11.437 ******** 2026-04-09 00:59:49.555748 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555752 | orchestrator | 2026-04-09 00:59:49.555757 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:59:49.555762 | orchestrator | Thursday 09 April 2026 00:58:13 +0000 (0:00:00.124) 0:00:11.562 ******** 2026-04-09 00:59:49.555767 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555771 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555775 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555779 | orchestrator | 2026-04-09 00:59:49.555784 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-09 00:59:49.555789 | orchestrator | Thursday 09 April 2026 00:58:14 +0000 (0:00:00.310) 0:00:11.873 ******** 2026-04-09 00:59:49.555794 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:59:49.555799 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:59:49.555803 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:49.555808 | orchestrator | 2026-04-09 00:59:49.555812 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-09 00:59:49.555817 | orchestrator | Thursday 09 April 2026 00:58:15 +0000 (0:00:01.653) 0:00:13.526 ******** 2026-04-09 00:59:49.555822 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 00:59:49.555832 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 00:59:49.555837 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 00:59:49.555842 | orchestrator | 2026-04-09 00:59:49.555847 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-09 00:59:49.555852 | orchestrator | Thursday 09 April 2026 
00:58:18 +0000 (0:00:02.711) 0:00:16.238 ******** 2026-04-09 00:59:49.555857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 00:59:49.555862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 00:59:49.555873 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 00:59:49.555878 | orchestrator | 2026-04-09 00:59:49.555882 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-09 00:59:49.555886 | orchestrator | Thursday 09 April 2026 00:58:20 +0000 (0:00:01.984) 0:00:18.222 ******** 2026-04-09 00:59:49.555890 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:59:49.555894 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:59:49.555898 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:59:49.555902 | orchestrator | 2026-04-09 00:59:49.555906 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-09 00:59:49.555910 | orchestrator | Thursday 09 April 2026 00:58:21 +0000 (0:00:01.513) 0:00:19.735 ******** 2026-04-09 00:59:49.555914 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555918 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555924 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555931 | orchestrator | 2026-04-09 00:59:49.555937 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-09 00:59:49.555944 | orchestrator | Thursday 09 April 2026 00:58:22 +0000 (0:00:00.251) 0:00:19.987 ******** 2026-04-09 00:59:49.555950 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.555956 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.555962 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.555969 | orchestrator | 2026-04-09 00:59:49.555975 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:59:49.555988 | orchestrator | Thursday 09 April 2026 00:58:22 +0000 (0:00:00.376) 0:00:20.363 ******** 2026-04-09 00:59:49.555995 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:59:49.556002 | orchestrator | 2026-04-09 00:59:49.556010 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-09 00:59:49.556017 | orchestrator | Thursday 09 April 2026 00:58:23 +0000 (0:00:00.589) 0:00:20.952 ******** 2026-04-09 00:59:49.556042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.556061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.556092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.556098 | orchestrator | 2026-04-09 00:59:49.556102 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-09 00:59:49.556106 | orchestrator | Thursday 09 April 2026 00:58:24 +0000 (0:00:01.460) 0:00:22.413 ******** 2026-04-09 
00:59:49.556113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556125 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.556143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556150 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.556158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556169 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.556175 | orchestrator | 2026-04-09 00:59:49.556182 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-09 00:59:49.556188 | orchestrator | Thursday 09 April 2026 00:58:25 +0000 (0:00:00.847) 0:00:23.260 ******** 2026-04-09 00:59:49.556205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556218 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.556225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556255 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.556262 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.556268 | orchestrator | 2026-04-09 00:59:49.556275 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-09 00:59:49.556281 | orchestrator | Thursday 09 April 2026 00:58:27 +0000 (0:00:01.509) 0:00:24.769 ******** 2026-04-09 00:59:49.556293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.556302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.556319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:59:49.556325 | orchestrator | 2026-04-09 00:59:49.556334 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-09 00:59:49.556340 | orchestrator | Thursday 09 April 2026 00:58:28 +0000 (0:00:01.585) 0:00:26.355 ******** 2026-04-09 00:59:49.556346 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:59:49.556353 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:49.556506 | orchestrator | } 2026-04-09 00:59:49.556515 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:59:49.556522 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:49.556527 | 
orchestrator | } 2026-04-09 00:59:49.556533 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:59:49.556540 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:49.556546 | orchestrator | } 2026-04-09 00:59:49.556552 | orchestrator | 2026-04-09 00:59:49.556559 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:59:49.556587 | orchestrator | Thursday 09 April 2026 00:58:29 +0000 (0:00:00.652) 0:00:27.008 ******** 2026-04-09 00:59:49.556597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556605 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.556629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556646 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.556653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:59:49.556659 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.556665 | orchestrator | 2026-04-09 00:59:49.556671 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:59:49.556685 | orchestrator | Thursday 09 April 2026 00:58:31 +0000 (0:00:02.110) 0:00:29.118 ******** 2026-04-09 00:59:49.556696 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:49.556702 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:49.556708 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:49.556714 | 
orchestrator | 2026-04-09 00:59:49.556720 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:59:49.556726 | orchestrator | Thursday 09 April 2026 00:58:31 +0000 (0:00:00.246) 0:00:29.365 ******** 2026-04-09 00:59:49.556732 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:59:49.556738 | orchestrator | 2026-04-09 00:59:49.556749 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-09 00:59:49.556755 | orchestrator | Thursday 09 April 2026 00:58:32 +0000 (0:00:00.643) 0:00:30.008 ******** 2026-04-09 00:59:49.556761 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:49.556767 | orchestrator | 2026-04-09 00:59:49.556773 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-09 00:59:49.556779 | orchestrator | Thursday 09 April 2026 00:58:34 +0000 (0:00:02.455) 0:00:32.464 ******** 2026-04-09 00:59:49.556785 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:49.556791 | orchestrator | 2026-04-09 00:59:49.556797 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-09 00:59:49.556803 | orchestrator | Thursday 09 April 2026 00:58:37 +0000 (0:00:02.471) 0:00:34.935 ******** 2026-04-09 00:59:49.556809 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:49.556815 | orchestrator | 2026-04-09 00:59:49.556821 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 00:59:49.556827 | orchestrator | Thursday 09 April 2026 00:58:55 +0000 (0:00:18.050) 0:00:52.986 ******** 2026-04-09 00:59:49.556833 | orchestrator | 2026-04-09 00:59:49.556839 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 00:59:49.556845 | orchestrator | Thursday 09 April 2026 
00:58:55 +0000 (0:00:00.062) 0:00:53.049 ******** 2026-04-09 00:59:49.556851 | orchestrator | 2026-04-09 00:59:49.556857 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 00:59:49.556864 | orchestrator | Thursday 09 April 2026 00:58:55 +0000 (0:00:00.062) 0:00:53.111 ******** 2026-04-09 00:59:49.556870 | orchestrator | 2026-04-09 00:59:49.556876 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-09 00:59:49.556883 | orchestrator | Thursday 09 April 2026 00:58:55 +0000 (0:00:00.069) 0:00:53.180 ******** 2026-04-09 00:59:49.556888 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:49.556895 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:59:49.556901 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:59:49.556907 | orchestrator | 2026-04-09 00:59:49.556912 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:59:49.556920 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-09 00:59:49.556927 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 00:59:49.556935 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 00:59:49.556940 | orchestrator | 2026-04-09 00:59:49.556946 | orchestrator | 2026-04-09 00:59:49.556952 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:59:49.556958 | orchestrator | Thursday 09 April 2026 00:59:48 +0000 (0:00:53.012) 0:01:46.192 ******** 2026-04-09 00:59:49.556964 | orchestrator | =============================================================================== 2026-04-09 00:59:49.556971 | orchestrator | horizon : Restart horizon container ------------------------------------ 53.01s 2026-04-09 
00:59:49.556977 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 18.05s 2026-04-09 00:59:49.556983 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.71s 2026-04-09 00:59:49.556996 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.47s 2026-04-09 00:59:49.557002 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.46s 2026-04-09 00:59:49.557008 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.11s 2026-04-09 00:59:49.557014 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.98s 2026-04-09 00:59:49.557020 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.65s 2026-04-09 00:59:49.557026 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.59s 2026-04-09 00:59:49.557032 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s 2026-04-09 00:59:49.557037 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.51s 2026-04-09 00:59:49.557043 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.46s 2026-04-09 00:59:49.557049 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.43s 2026-04-09 00:59:49.557055 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.85s 2026-04-09 00:59:49.557061 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2026-04-09 00:59:49.557068 | orchestrator | service-check-containers : horizon | Notify handlers to restart containers --- 0.65s 2026-04-09 00:59:49.557074 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2026-04-09 
00:59:49.557081 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-04-09 00:59:49.557094 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s 2026-04-09 00:59:49.557101 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-04-09 00:59:49.557107 | orchestrator | 2026-04-09 00:59:49 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:59:49.557114 | orchestrator | 2026-04-09 00:59:49 | INFO  | Task a17f063e-9ac1-463c-bf29-73da1d92c820 is in state STARTED 2026-04-09 00:59:49.558188 | orchestrator | 2026-04-09 00:59:49 | INFO  | Task 69ad4f87-b957-4a19-ac1e-747ca5046a26 is in state STARTED 2026-04-09 00:59:49.559761 | orchestrator | 2026-04-09 00:59:49 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 00:59:49.559806 | orchestrator | 2026-04-09 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:52.604882 | orchestrator | 2026-04-09 00:59:52 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:59:52.607771 | orchestrator | 2026-04-09 00:59:52 | INFO  | Task a17f063e-9ac1-463c-bf29-73da1d92c820 is in state STARTED 2026-04-09 00:59:52.609265 | orchestrator | 2026-04-09 00:59:52 | INFO  | Task 69ad4f87-b957-4a19-ac1e-747ca5046a26 is in state STARTED 2026-04-09 00:59:52.611116 | orchestrator | 2026-04-09 00:59:52 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 00:59:52.611162 | orchestrator | 2026-04-09 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:55.649972 | orchestrator | 2026-04-09 00:59:55 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:59:55.650431 | orchestrator | 2026-04-09 00:59:55 | INFO  | Task a17f063e-9ac1-463c-bf29-73da1d92c820 is in state STARTED 2026-04-09 00:59:55.653895 | orchestrator | 2026-04-09 
00:59:55 | INFO  | Task 69ad4f87-b957-4a19-ac1e-747ca5046a26 is in state STARTED 2026-04-09 00:59:55.654517 | orchestrator | 2026-04-09 00:59:55 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 00:59:55.654611 | orchestrator | 2026-04-09 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:58.695290 | orchestrator | 2026-04-09 00:59:58 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 00:59:58.695375 | orchestrator | 2026-04-09 00:59:58 | INFO  | Task a17f063e-9ac1-463c-bf29-73da1d92c820 is in state STARTED 2026-04-09 00:59:58.695482 | orchestrator | 2026-04-09 00:59:58 | INFO  | Task 69ad4f87-b957-4a19-ac1e-747ca5046a26 is in state STARTED 2026-04-09 00:59:58.696261 | orchestrator | 2026-04-09 00:59:58 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 00:59:58.696285 | orchestrator | 2026-04-09 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:01.734345 | orchestrator | 2026-04-09 01:00:01 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 01:00:01.734592 | orchestrator | 2026-04-09 01:00:01 | INFO  | Task a17f063e-9ac1-463c-bf29-73da1d92c820 is in state STARTED 2026-04-09 01:00:01.735757 | orchestrator | 2026-04-09 01:00:01 | INFO  | Task 69ad4f87-b957-4a19-ac1e-747ca5046a26 is in state SUCCESS 2026-04-09 01:00:01.737175 | orchestrator | 2026-04-09 01:00:01 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:01.737221 | orchestrator | 2026-04-09 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:04.773096 | orchestrator | 2026-04-09 01:00:04 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:04.774145 | orchestrator | 2026-04-09 01:00:04 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 01:00:04.776589 | orchestrator | 2026-04-09 01:00:04 | INFO  | Task 
a17f063e-9ac1-463c-bf29-73da1d92c820 is in state SUCCESS 2026-04-09 01:00:04.777248 | orchestrator | 2026-04-09 01:00:04 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:04.778182 | orchestrator | 2026-04-09 01:00:04 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:04.778208 | orchestrator | 2026-04-09 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:07.803927 | orchestrator | 2026-04-09 01:00:07 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:07.804210 | orchestrator | 2026-04-09 01:00:07 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 01:00:07.805065 | orchestrator | 2026-04-09 01:00:07 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:07.805883 | orchestrator | 2026-04-09 01:00:07 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:07.805924 | orchestrator | 2026-04-09 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:10.835990 | orchestrator | 2026-04-09 01:00:10 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:10.837439 | orchestrator | 2026-04-09 01:00:10 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 01:00:10.839123 | orchestrator | 2026-04-09 01:00:10 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:10.841042 | orchestrator | 2026-04-09 01:00:10 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:10.841083 | orchestrator | 2026-04-09 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:13.876894 | orchestrator | 2026-04-09 01:00:13 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:13.877765 | orchestrator | 2026-04-09 01:00:13 | INFO  | Task 
d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 01:00:13.879107 | orchestrator | 2026-04-09 01:00:13 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:13.880217 | orchestrator | 2026-04-09 01:00:13 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:13.880262 | orchestrator | 2026-04-09 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:16.918223 | orchestrator | 2026-04-09 01:00:16 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:16.918297 | orchestrator | 2026-04-09 01:00:16 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 01:00:16.919374 | orchestrator | 2026-04-09 01:00:16 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:16.919882 | orchestrator | 2026-04-09 01:00:16 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:16.919915 | orchestrator | 2026-04-09 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:19.959975 | orchestrator | 2026-04-09 01:00:19 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:19.960379 | orchestrator | 2026-04-09 01:00:19 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 01:00:19.961038 | orchestrator | 2026-04-09 01:00:19 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:19.962250 | orchestrator | 2026-04-09 01:00:19 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:19.962303 | orchestrator | 2026-04-09 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:22.994778 | orchestrator | 2026-04-09 01:00:22 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:22.997024 | orchestrator | 2026-04-09 01:00:22 | INFO  | Task 
d953b8d4-d146-45bb-8864-a16495f4f9ce is in state STARTED 2026-04-09 01:00:22.997100 | orchestrator | 2026-04-09 01:00:22 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:22.997718 | orchestrator | 2026-04-09 01:00:22 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:22.997769 | orchestrator | 2026-04-09 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:26.024994 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:26.025618 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task d953b8d4-d146-45bb-8864-a16495f4f9ce is in state SUCCESS 2026-04-09 01:00:26.027192 | orchestrator | 2026-04-09 01:00:26.027229 | orchestrator | 2026-04-09 01:00:26.027235 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:00:26.027240 | orchestrator | 2026-04-09 01:00:26.027244 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:00:26.027249 | orchestrator | Thursday 09 April 2026 00:59:13 +0000 (0:00:00.182) 0:00:00.182 ******** 2026-04-09 01:00:26.027253 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:00:26.027258 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:00:26.027262 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:00:26.027266 | orchestrator | 2026-04-09 01:00:26.027270 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:00:26.027274 | orchestrator | Thursday 09 April 2026 00:59:14 +0000 (0:00:00.334) 0:00:00.516 ******** 2026-04-09 01:00:26.027278 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-09 01:00:26.027282 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-09 01:00:26.027286 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-09 
01:00:26.027290 | orchestrator | 2026-04-09 01:00:26.027294 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-04-09 01:00:26.027319 | orchestrator | 2026-04-09 01:00:26.027339 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-04-09 01:00:26.027349 | orchestrator | Thursday 09 April 2026 00:59:14 +0000 (0:00:00.584) 0:00:01.100 ******** 2026-04-09 01:00:26.027355 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:00:26.027362 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:00:26.027367 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:00:26.027373 | orchestrator | 2026-04-09 01:00:26.027777 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:00:26.027796 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:26.027805 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:26.027811 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:26.027818 | orchestrator | 2026-04-09 01:00:26.027824 | orchestrator | 2026-04-09 01:00:26.027830 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:00:26.027836 | orchestrator | Thursday 09 April 2026 01:00:00 +0000 (0:00:46.162) 0:00:47.263 ******** 2026-04-09 01:00:26.027843 | orchestrator | =============================================================================== 2026-04-09 01:00:26.027850 | orchestrator | Waiting for Keystone public port to be UP ------------------------------ 46.16s 2026-04-09 01:00:26.027856 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2026-04-09 01:00:26.027863 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.33s 2026-04-09 01:00:26.027869 | orchestrator | 2026-04-09 01:00:26.027876 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-09 01:00:26.027883 | orchestrator | 2.16.14 2026-04-09 01:00:26.027890 | orchestrator | 2026-04-09 01:00:26.027897 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-09 01:00:26.027903 | orchestrator | 2026-04-09 01:00:26.027910 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-09 01:00:26.027917 | orchestrator | Thursday 09 April 2026 00:59:15 +0000 (0:00:00.235) 0:00:00.235 ******** 2026-04-09 01:00:26.027924 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.027931 | orchestrator | 2026-04-09 01:00:26.027938 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-09 01:00:26.027945 | orchestrator | Thursday 09 April 2026 00:59:17 +0000 (0:00:02.692) 0:00:02.927 ******** 2026-04-09 01:00:26.027951 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.027957 | orchestrator | 2026-04-09 01:00:26.027964 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-09 01:00:26.027972 | orchestrator | Thursday 09 April 2026 00:59:19 +0000 (0:00:01.268) 0:00:04.196 ******** 2026-04-09 01:00:26.027979 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.027985 | orchestrator | 2026-04-09 01:00:26.027991 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-09 01:00:26.027998 | orchestrator | Thursday 09 April 2026 00:59:20 +0000 (0:00:01.024) 0:00:05.221 ******** 2026-04-09 01:00:26.028005 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.028012 | orchestrator | 2026-04-09 01:00:26.028019 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] 
**************************** 2026-04-09 01:00:26.028026 | orchestrator | Thursday 09 April 2026 00:59:21 +0000 (0:00:01.187) 0:00:06.408 ******** 2026-04-09 01:00:26.028033 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.028040 | orchestrator | 2026-04-09 01:00:26.028047 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-09 01:00:26.028054 | orchestrator | Thursday 09 April 2026 00:59:22 +0000 (0:00:01.057) 0:00:07.465 ******** 2026-04-09 01:00:26.028060 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.028066 | orchestrator | 2026-04-09 01:00:26.028086 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-09 01:00:26.028093 | orchestrator | Thursday 09 April 2026 00:59:23 +0000 (0:00:01.260) 0:00:08.726 ******** 2026-04-09 01:00:26.028100 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.028107 | orchestrator | 2026-04-09 01:00:26.028113 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-09 01:00:26.028120 | orchestrator | Thursday 09 April 2026 00:59:25 +0000 (0:00:02.129) 0:00:10.855 ******** 2026-04-09 01:00:26.028127 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.028133 | orchestrator | 2026-04-09 01:00:26.028140 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-09 01:00:26.028147 | orchestrator | Thursday 09 April 2026 00:59:27 +0000 (0:00:01.281) 0:00:12.137 ******** 2026-04-09 01:00:26.028153 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:26.028159 | orchestrator | 2026-04-09 01:00:26.028206 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-09 01:00:26.028216 | orchestrator | Thursday 09 April 2026 00:59:38 +0000 (0:00:11.906) 0:00:24.043 ******** 2026-04-09 01:00:26.028222 | orchestrator | skipping: 
[testbed-manager] 2026-04-09 01:00:26.028228 | orchestrator | 2026-04-09 01:00:26.028235 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-09 01:00:26.028241 | orchestrator | 2026-04-09 01:00:26.028248 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-09 01:00:26.028254 | orchestrator | Thursday 09 April 2026 00:59:39 +0000 (0:00:00.135) 0:00:24.179 ******** 2026-04-09 01:00:26.028260 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:00:26.028267 | orchestrator | 2026-04-09 01:00:26.028273 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-09 01:00:26.028280 | orchestrator | 2026-04-09 01:00:26.028287 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-09 01:00:26.028293 | orchestrator | Thursday 09 April 2026 00:59:40 +0000 (0:00:01.894) 0:00:26.073 ******** 2026-04-09 01:00:26.028299 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:00:26.028306 | orchestrator | 2026-04-09 01:00:26.028313 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-09 01:00:26.028319 | orchestrator | 2026-04-09 01:00:26.028334 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-09 01:00:26.028341 | orchestrator | Thursday 09 April 2026 00:59:52 +0000 (0:00:11.605) 0:00:37.679 ******** 2026-04-09 01:00:26.028348 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:00:26.028354 | orchestrator | 2026-04-09 01:00:26.028361 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:00:26.028367 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 01:00:26.028376 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-04-09 01:00:26.028383 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:26.028389 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:26.028395 | orchestrator | 2026-04-09 01:00:26.028401 | orchestrator | 2026-04-09 01:00:26.028407 | orchestrator | 2026-04-09 01:00:26.028414 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:00:26.028420 | orchestrator | Thursday 09 April 2026 01:00:04 +0000 (0:00:11.711) 0:00:49.391 ******** 2026-04-09 01:00:26.028426 | orchestrator | =============================================================================== 2026-04-09 01:00:26.028433 | orchestrator | Restart ceph manager service ------------------------------------------- 25.21s 2026-04-09 01:00:26.028440 | orchestrator | Create admin user ------------------------------------------------------ 11.91s 2026-04-09 01:00:26.028455 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.69s 2026-04-09 01:00:26.028463 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.13s 2026-04-09 01:00:26.028469 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.28s 2026-04-09 01:00:26.028476 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.27s 2026-04-09 01:00:26.028484 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.26s 2026-04-09 01:00:26.028491 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.19s 2026-04-09 01:00:26.028497 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s 2026-04-09 01:00:26.028504 | orchestrator | Set mgr/dashboard/server_port to 7000 
----------------------------------- 1.02s 2026-04-09 01:00:26.028512 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2026-04-09 01:00:26.028565 | orchestrator | 2026-04-09 01:00:26.028573 | orchestrator | 2026-04-09 01:00:26.028579 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:00:26.028585 | orchestrator | 2026-04-09 01:00:26.028591 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:00:26.028598 | orchestrator | Thursday 09 April 2026 00:58:02 +0000 (0:00:00.314) 0:00:00.314 ******** 2026-04-09 01:00:26.028604 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:00:26.028611 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:00:26.028617 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:00:26.028624 | orchestrator | 2026-04-09 01:00:26.028631 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:00:26.028638 | orchestrator | Thursday 09 April 2026 00:58:02 +0000 (0:00:00.283) 0:00:00.597 ******** 2026-04-09 01:00:26.028646 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-09 01:00:26.028653 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-09 01:00:26.028660 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-09 01:00:26.028666 | orchestrator | 2026-04-09 01:00:26.028673 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-09 01:00:26.028679 | orchestrator | 2026-04-09 01:00:26.028685 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 01:00:26.028692 | orchestrator | Thursday 09 April 2026 00:58:03 +0000 (0:00:00.298) 0:00:00.895 ******** 2026-04-09 01:00:26.028698 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:00:26.028706 | orchestrator | 2026-04-09 01:00:26.028713 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-09 01:00:26.028718 | orchestrator | Thursday 09 April 2026 00:58:03 +0000 (0:00:00.674) 0:00:01.570 ******** 2026-04-09 01:00:26.028768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.028781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.028797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.028975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029071 | orchestrator | 2026-04-09 01:00:26.029078 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-09 01:00:26.029084 | orchestrator | Thursday 09 April 2026 00:58:06 +0000 (0:00:02.174) 0:00:03.744 ******** 2026-04-09 01:00:26.029090 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:00:26.029097 | orchestrator | 2026-04-09 01:00:26.029103 | orchestrator | TASK 
[keystone : Set keystone policy file] ************************************* 2026-04-09 01:00:26.029109 | orchestrator | Thursday 09 April 2026 00:58:06 +0000 (0:00:00.097) 0:00:03.841 ******** 2026-04-09 01:00:26.029115 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:00:26.029121 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:00:26.029126 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:00:26.029132 | orchestrator | 2026-04-09 01:00:26.029138 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-09 01:00:26.029144 | orchestrator | Thursday 09 April 2026 00:58:06 +0000 (0:00:00.195) 0:00:04.036 ******** 2026-04-09 01:00:26.029150 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:00:26.029156 | orchestrator | 2026-04-09 01:00:26.029162 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 01:00:26.029169 | orchestrator | Thursday 09 April 2026 00:58:07 +0000 (0:00:00.756) 0:00:04.793 ******** 2026-04-09 01:00:26.029173 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:00:26.029178 | orchestrator | 2026-04-09 01:00:26.029181 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-09 01:00:26.029185 | orchestrator | Thursday 09 April 2026 00:58:07 +0000 (0:00:00.542) 0:00:05.335 ******** 2026-04-09 01:00:26.029210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029291 | orchestrator | 2026-04-09 01:00:26.029296 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-09 01:00:26.029302 | orchestrator | Thursday 09 April 2026 00:58:10 +0000 (0:00:03.237) 0:00:08.573 ******** 2026-04-09 01:00:26.029309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 01:00:26.029343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029350 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:00:26.029356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 01:00:26.029369 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:00:26.029382 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 01:00:26.029410 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:00:26.029416 | orchestrator | 2026-04-09 01:00:26.029420 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-09 01:00:26.029424 | orchestrator | Thursday 09 April 2026 00:58:11 +0000 (0:00:00.627) 0:00:09.201 ******** 2026-04-09 01:00:26.029429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 01:00:26.029445 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:00:26.029456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 01:00:26.029469 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:00:26.029473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-04-09 01:00:26.029492 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:00:26.029496 | orchestrator | 2026-04-09 01:00:26.029500 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-09 01:00:26.029504 | orchestrator | Thursday 09 April 2026 00:58:12 +0000 (0:00:00.991) 0:00:10.192 ******** 2026-04-09 01:00:26.029513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029587 | orchestrator | 2026-04-09 01:00:26.029592 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-09 01:00:26.029597 | orchestrator | Thursday 09 April 2026 00:58:15 +0000 (0:00:03.342) 0:00:13.534 ******** 2026-04-09 01:00:26.029607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 01:00:26.029640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 01:00:26.029661 | orchestrator | 2026-04-09 01:00:26.029666 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-09 01:00:26.029670 | orchestrator | Thursday 09 April 2026 00:58:21 +0000 (0:00:05.587) 0:00:19.121 ******** 2026-04-09 01:00:26.029675 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:00:26.029680 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:00:26.029688 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:00:26.029692 | orchestrator | 2026-04-09 01:00:26.029697 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-09 01:00:26.029702 | orchestrator | Thursday 09 April 2026 00:58:22 +0000 (0:00:01.367) 0:00:20.489 ******** 2026-04-09 01:00:26.029706 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:00:26.029711 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 01:00:26.029715 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:00:26.029720 | orchestrator | 2026-04-09 01:00:26.029724 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-09 01:00:26.029729 | orchestrator | Thursday 09 April 2026 00:58:23 +0000 (0:00:00.959) 0:00:21.449 ******** 2026-04-09 01:00:26.029734 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:00:26.029738 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:00:26.029743 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:00:26.029747 | orchestrator | 2026-04-09 01:00:26.029752 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-09 01:00:26.029756 | orchestrator | Thursday 09 April 2026 00:58:24 +0000 (0:00:00.298) 0:00:21.748 ******** 2026-04-09 01:00:26.029761 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:00:26.029766 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:00:26.029770 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:00:26.029775 | orchestrator | 2026-04-09 01:00:26.029780 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-09 01:00:26.029785 | orchestrator | Thursday 09 April 2026 00:58:24 +0000 (0:00:00.231) 0:00:21.980 ******** 2026-04-09 01:00:26.029794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 01:00:26.029825 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:00:26.029831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 01:00:26.029843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 01:00:26.029850 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:00:26.029867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 01:00:26.029878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 01:00:26.029891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 01:00:26.029897 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:00:26.029904 | orchestrator |
2026-04-09 01:00:26.029910 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-09 01:00:26.029916 | orchestrator | Thursday 09 April 2026 00:58:24 +0000 (0:00:00.485) 0:00:22.465 ********
2026-04-09 01:00:26.029921 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.029927 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:00:26.029933 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:00:26.029939 | orchestrator |
2026-04-09 01:00:26.029945 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-04-09 01:00:26.029951 | orchestrator | Thursday 09 April 2026 00:58:25 +0000 (0:00:00.668) 0:00:23.134 ********
2026-04-09 01:00:26.029957 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-09 01:00:26.029964 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-09 01:00:26.029970 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-09 01:00:26.029977 | orchestrator |
2026-04-09 01:00:26.029983 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-09 01:00:26.029989 | orchestrator | Thursday 09 April 2026 00:58:27 +0000 (0:00:01.971) 0:00:25.106 ********
2026-04-09 01:00:26.029995 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 01:00:26.030001 | orchestrator |
2026-04-09 01:00:26.030006 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-09 01:00:26.030066 | orchestrator | Thursday 09 April 2026 00:58:28 +0000 (0:00:01.288) 0:00:26.395 ********
2026-04-09 01:00:26.030074 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.030077 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:00:26.030081 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:00:26.030085 | orchestrator |
2026-04-09 01:00:26.030089 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-09 01:00:26.030093 | orchestrator | Thursday 09 April 2026 00:58:29 +0000 (0:00:00.756) 0:00:27.151 ********
2026-04-09 01:00:26.030097 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-09 01:00:26.030100 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 01:00:26.030104 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-09 01:00:26.030108 | orchestrator |
2026-04-09 01:00:26.030112 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-09 01:00:26.030116 | orchestrator | Thursday 09 April 2026 00:58:31 +0000 (0:00:01.743) 0:00:28.895 ********
2026-04-09 01:00:26.030120 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:00:26.030124 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:00:26.030128 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:00:26.030132 | orchestrator |
2026-04-09 01:00:26.030142 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-09 01:00:26.030146 | orchestrator | Thursday 09 April 2026 00:58:31 +0000 (0:00:00.427) 0:00:29.322 ********
2026-04-09 01:00:26.030151 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-09 01:00:26.030154 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-09 01:00:26.030163 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-09 01:00:26.030167 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-09 01:00:26.030171 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-09 01:00:26.030175 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-09 01:00:26.030179 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-09 01:00:26.030187 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-09 01:00:26.030191 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-09 01:00:26.030194 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 01:00:26.030198 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 01:00:26.030202 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 01:00:26.030206 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 01:00:26.030210 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 01:00:26.030213 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 01:00:26.030218 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:00:26.030222 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:00:26.030228 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:00:26.030234 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:00:26.030243 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:00:26.030249 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:00:26.030255 | orchestrator |
2026-04-09 01:00:26.030261 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-09 01:00:26.030267 | orchestrator | Thursday 09 April 2026 00:58:40 +0000 (0:00:09.203) 0:00:38.526 ********
2026-04-09 01:00:26.030272 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:00:26.030278 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:00:26.030284 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:00:26.030290 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:00:26.030296 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:00:26.030301 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:00:26.030307 | orchestrator |
2026-04-09 01:00:26.030313 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-04-09 01:00:26.030319 | orchestrator | Thursday 09 April 2026 00:58:43 +0000 (0:00:02.780) 0:00:41.306 ********
2026-04-09 01:00:26.030330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-09 01:00:26.030349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-09 01:00:26.030356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-09 01:00:26.030363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 01:00:26.030370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 01:00:26.030381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 01:00:26.030392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 01:00:26.030400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 01:00:26.030406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 01:00:26.030412 | orchestrator |
2026-04-09 01:00:26.030418 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] ***
2026-04-09 01:00:26.030424 | orchestrator | Thursday 09 April 2026 00:58:46 +0000 (0:00:02.791) 0:00:44.098 ********
2026-04-09 01:00:26.030430 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 01:00:26.030437 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:00:26.030443 | orchestrator | }
2026-04-09 01:00:26.030449 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 01:00:26.030456 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:00:26.030462 | orchestrator | }
2026-04-09 01:00:26.030468 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 01:00:26.030474 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:00:26.030480 | orchestrator | }
2026-04-09 01:00:26.030487 | orchestrator |
2026-04-09 01:00:26.030493 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 01:00:26.030499 | orchestrator | Thursday 09 April 2026 00:58:46 +0000 (0:00:00.503) 0:00:44.602 ********
2026-04-09 01:00:26.030503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-09 01:00:26.030517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 01:00:26.030543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 01:00:26.030550 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.030562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-09 01:00:26.030569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 01:00:26.030576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 01:00:26.030588 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:00:26.030594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-09 01:00:26.030605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-09 01:00:26.030614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-09 01:00:26.030623 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:00:26.030629 | orchestrator |
2026-04-09 01:00:26.030635 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-09 01:00:26.030641 | orchestrator | Thursday 09 April 2026 00:58:47 +0000 (0:00:00.722) 0:00:45.325 ********
2026-04-09 01:00:26.030648 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.030654 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:00:26.030660 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:00:26.030667 | orchestrator |
2026-04-09 01:00:26.030673 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-09 01:00:26.030680 | orchestrator | Thursday 09 April 2026 00:58:47 +0000 (0:00:00.258) 0:00:45.583 ********
2026-04-09 01:00:26.030686 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:00:26.030692 | orchestrator |
2026-04-09 01:00:26.030699 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-04-09 01:00:26.030705 | orchestrator | Thursday 09 April 2026 00:58:50 +0000 (0:00:02.800) 0:00:48.383 ********
2026-04-09 01:00:26.030711 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:00:26.030717 | orchestrator |
2026-04-09 01:00:26.030723 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-04-09 01:00:26.030735 | orchestrator | Thursday 09 April 2026 00:58:53 +0000 (0:00:02.757) 0:00:51.141 ********
2026-04-09 01:00:26.030742 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:00:26.030748 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:00:26.030754 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:00:26.030760 | orchestrator |
2026-04-09 01:00:26.030767 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-04-09 01:00:26.030773 | orchestrator | Thursday 09 April 2026 00:58:54 +0000 (0:00:01.094) 0:00:52.235 ********
2026-04-09 01:00:26.030780 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:00:26.030786 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:00:26.030792 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:00:26.030799 | orchestrator |
2026-04-09 01:00:26.030805 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-04-09 01:00:26.030811 | orchestrator | Thursday 09 April 2026 00:58:54 +0000 (0:00:00.307) 0:00:52.543 ********
2026-04-09 01:00:26.030818 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.030824 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:00:26.030830 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:00:26.030836 | orchestrator |
2026-04-09 01:00:26.030842 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-04-09 01:00:26.030849 | orchestrator | Thursday 09 April 2026 00:58:55 +0000 (0:00:00.295) 0:00:52.839 ********
2026-04-09 01:00:26.030855 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:00:26.030861 | orchestrator |
2026-04-09 01:00:26.030867 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-04-09 01:00:26.030874 | orchestrator | Thursday 09 April 2026 00:59:11 +0000 (0:00:16.189) 0:01:09.028 ********
2026-04-09 01:00:26.030880 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:00:26.030886 | orchestrator |
2026-04-09 01:00:26.030893 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-09 01:00:26.030899 | orchestrator | Thursday 09 April 2026 00:59:23 +0000 (0:00:12.530) 0:01:21.559 ********
2026-04-09 01:00:26.030905 | orchestrator |
2026-04-09 01:00:26.030911 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-09 01:00:26.030918 | orchestrator | Thursday 09 April 2026 00:59:23 +0000 (0:00:00.089) 0:01:21.649 ********
2026-04-09 01:00:26.030924 | orchestrator |
2026-04-09 01:00:26.030930 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-09 01:00:26.030936 | orchestrator | Thursday 09 April 2026 00:59:24 +0000 (0:00:00.122) 0:01:21.772 ********
2026-04-09 01:00:26.030943 | orchestrator |
2026-04-09 01:00:26.030949 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-09 01:00:26.030955 | orchestrator | Thursday 09 April 2026 00:59:24 +0000 (0:00:00.308) 0:01:22.080 ********
2026-04-09 01:00:26.030972 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:00:26.030984 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:00:26.030991 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:00:26.030997 | orchestrator |
2026-04-09 01:00:26.031003 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-09 01:00:26.031009 | orchestrator | Thursday 09 April 2026 00:59:35 +0000 (0:00:10.855) 0:01:32.936 ********
2026-04-09 01:00:26.031015 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:00:26.031022 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:00:26.031028 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:00:26.031035 | orchestrator |
2026-04-09 01:00:26.031045 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-09 01:00:26.031051 | orchestrator | Thursday 09 April 2026 00:59:40 +0000 (0:00:05.195) 0:01:38.132 ********
2026-04-09 01:00:26.031058 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:00:26.031073 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:00:26.031080 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:00:26.031093 | orchestrator |
2026-04-09 01:00:26.031100 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-09 01:00:26.031107 | orchestrator | Thursday 09 April 2026 00:59:52 +0000 (0:00:12.447) 0:01:50.579 ********
2026-04-09 01:00:26.031119 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:00:26.031126 | orchestrator |
2026-04-09 01:00:26.031133 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-04-09 01:00:26.031140 | orchestrator | Thursday 09 April 2026 00:59:53 +0000 (0:00:00.581) 0:01:51.161 ********
2026-04-09 01:00:26.031147 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:00:26.031154 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:00:26.031164 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:00:26.031170 | orchestrator |
2026-04-09 01:00:26.031177 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-04-09 01:00:26.031184 | orchestrator | Thursday 09 April 2026 00:59:54 +0000 (0:00:00.780) 0:01:51.941 ********
2026-04-09 01:00:26.031190 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:00:26.031197 | orchestrator |
2026-04-09 01:00:26.031204 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-04-09 01:00:26.031210 | orchestrator | Thursday 09 April 2026 00:59:55 +0000 (0:00:01.619) 0:01:53.560 ********
2026-04-09 01:00:26.031218 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-04-09 01:00:26.031225 | orchestrator |
2026-04-09 01:00:26.031232 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] *************
2026-04-09 01:00:26.031238 | orchestrator | Thursday 09 April 2026 01:00:09 +0000 (0:00:13.735) 0:02:07.296 ********
2026-04-09 01:00:26.031245 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-04-09 01:00:26.031251 | orchestrator |
2026-04-09 01:00:26.031259 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************
2026-04-09 01:00:26.031265 | orchestrator | Thursday 09 April 2026 01:00:12 +0000 (0:00:03.081) 0:02:10.377 ********
2026-04-09 01:00:26.031272 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-04-09 01:00:26.031279 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-04-09 01:00:26.031287 | orchestrator |
2026-04-09 01:00:26.031294 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-04-09 01:00:26.031301 | orchestrator | Thursday 09 April 2026 01:00:19 +0000 (0:00:07.162) 0:02:17.540 ********
2026-04-09 01:00:26.031307 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.031314 | orchestrator |
2026-04-09 01:00:26.031321 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-04-09 01:00:26.031328 | orchestrator | Thursday 09 April 2026 01:00:19 +0000 (0:00:00.119) 0:02:17.659 ********
2026-04-09 01:00:26.031334 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.031341 | orchestrator |
2026-04-09 01:00:26.031348 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-04-09 01:00:26.031355 | orchestrator | Thursday 09 April 2026 01:00:20 +0000 (0:00:00.138) 0:02:17.798 ********
2026-04-09 01:00:26.031361 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.031368 | orchestrator |
2026-04-09 01:00:26.031375 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] ***********
2026-04-09 01:00:26.031382 | orchestrator | Thursday 09 April 2026 01:00:20 +0000 (0:00:00.312) 0:02:18.112 ********
2026-04-09 01:00:26.031389 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.031395 | orchestrator |
2026-04-09 01:00:26.031402 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-04-09 01:00:26.031409 | orchestrator | Thursday 09 April 2026 01:00:20 +0000 (0:00:00.322) 0:02:18.434 ********
2026-04-09 01:00:26.031416 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:00:26.031422 | orchestrator |
2026-04-09 01:00:26.031429 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-09 01:00:26.031436 | orchestrator | Thursday 09 April 2026 01:00:24 +0000 (0:00:03.696) 0:02:22.130 ********
2026-04-09 01:00:26.031443 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:00:26.031450 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:00:26.031463 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:00:26.031470 | orchestrator |
2026-04-09 01:00:26.031477 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:00:26.031484 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-09 01:00:26.031492 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-09 01:00:26.031498 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-09 01:00:26.031505 | orchestrator |
2026-04-09 01:00:26.031512 | orchestrator |
2026-04-09 01:00:26.031534 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:00:26.031541 | orchestrator | Thursday 09 April 2026 01:00:24 +0000 (0:00:00.392) 0:02:22.523 ********
2026-04-09 01:00:26.031547 | orchestrator | ===============================================================================
2026-04-09 01:00:26.031553 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.19s
2026-04-09 01:00:26.031560 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.74s
2026-04-09 01:00:26.031570 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.53s
2026-04-09 01:00:26.031577 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.45s
2026-04-09 01:00:26.031583 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.86s
2026-04-09 01:00:26.031589 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.20s
2026-04-09 01:00:26.031595 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 7.16s
2026-04-09 01:00:26.031602 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.59s
2026-04-09 01:00:26.031608 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.20s
2026-04-09 01:00:26.031614 | orchestrator | keystone : Creating default user role ----------------------------------- 3.70s
2026-04-09 01:00:26.031620 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.34s
2026-04-09 01:00:26.031627 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.24s
2026-04-09 01:00:26.031637 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 3.08s
2026-04-09 01:00:26.031643 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.80s
2026-04-09 01:00:26.031649 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.79s
2026-04-09 01:00:26.031655 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.78s
2026-04-09 01:00:26.031661 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.76s
2026-04-09 01:00:26.031668 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.17s
2026-04-09 01:00:26.031674 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.97s
2026-04-09 01:00:26.031681 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.74s
2026-04-09 01:00:26.031687 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED
2026-04-09 01:00:26.031693 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED
2026-04-09 01:00:26.031700 | orchestrator | 2026-04-09 01:00:26 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:00:29.137227 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED
2026-04-09 01:00:29.138643 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED
2026-04-09 01:00:29.140860 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED
2026-04-09 01:00:29.140938 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED
2026-04-09 01:00:29.140948 | orchestrator | 2026-04-09 01:00:29 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:00:32.178450 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED
2026-04-09 01:00:32.179975 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED
2026-04-09 01:00:32.180932 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED
2026-04-09 01:00:32.182291 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED
2026-04-09 01:00:32.182332 | orchestrator | 2026-04-09
01:00:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:35.274920 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:35.275000 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:00:35.275010 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:35.275017 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:35.275024 | orchestrator | 2026-04-09 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:38.240733 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:38.241249 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:00:38.241407 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state STARTED 2026-04-09 01:00:38.242302 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:00:38.242640 | orchestrator | 2026-04-09 01:00:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:41.267188 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:00:41.267278 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:00:41.267284 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:00:41.267289 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task 1436db0e-8b48-4209-beef-36f1d16b91a3 is in state SUCCESS 2026-04-09 01:00:41.267837 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task 
0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state STARTED 2026-04-09 01:01:45.176380 | orchestrator | 2026-04-09 01:01:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:01:48.212172 | orchestrator | 2026-04-09 01:01:48 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:01:48.213313 | orchestrator | 2026-04-09 01:01:48 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:01:48.213708 | orchestrator | 2026-04-09 01:01:48 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:01:48.216088 | orchestrator | 2026-04-09 01:01:48 | INFO  | Task 0b742453-cf3d-45f7-b46f-2c59de5b8b49 is in state SUCCESS 2026-04-09 01:01:48.216128 | orchestrator | 2026-04-09 01:01:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:01:48.218058 | orchestrator | 2026-04-09 01:01:48.218100 | orchestrator | 2026-04-09 01:01:48.218107 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:01:48.218114 | orchestrator | 2026-04-09 01:01:48.218120 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:01:48.218134 | orchestrator | Thursday 09 April 2026 01:00:05 +0000 (0:00:00.511) 0:00:00.511 ******** 2026-04-09 01:01:48.218186 | orchestrator | ok: [testbed-manager] 2026-04-09 01:01:48.218194 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:01:48.218201 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:01:48.218207 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:01:48.218214 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:01:48.218220 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:01:48.218226 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:01:48.218232 | orchestrator | 2026-04-09 01:01:48.218238 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:01:48.218244 | orchestrator | 
Thursday 09 April 2026 01:00:06 +0000 (0:00:00.552) 0:00:01.063 ******** 2026-04-09 01:01:48.218251 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-09 01:01:48.218376 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-09 01:01:48.218388 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-09 01:01:48.218394 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-09 01:01:48.218401 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-09 01:01:48.218407 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-09 01:01:48.218414 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-09 01:01:48.218613 | orchestrator | 2026-04-09 01:01:48.218630 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-09 01:01:48.218638 | orchestrator | 2026-04-09 01:01:48.218644 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-09 01:01:48.218650 | orchestrator | Thursday 09 April 2026 01:00:07 +0000 (0:00:00.595) 0:00:01.658 ******** 2026-04-09 01:01:48.218658 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:01:48.218665 | orchestrator | 2026-04-09 01:01:48.218672 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] ************* 2026-04-09 01:01:48.218680 | orchestrator | Thursday 09 April 2026 01:00:07 +0000 (0:00:00.894) 0:00:02.552 ******** 2026-04-09 01:01:48.218686 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-04-09 01:01:48.218692 | orchestrator | 2026-04-09 01:01:48.218699 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************ 2026-04-09 01:01:48.218705 | 
orchestrator | Thursday 09 April 2026 01:00:11 +0000 (0:00:04.000) 0:00:06.553 ******** 2026-04-09 01:01:48.218712 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-09 01:01:48.218720 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-09 01:01:48.218726 | orchestrator | 2026-04-09 01:01:48.218732 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-09 01:01:48.218739 | orchestrator | Thursday 09 April 2026 01:00:18 +0000 (0:00:06.803) 0:00:13.356 ******** 2026-04-09 01:01:48.218746 | orchestrator | changed: [testbed-manager] => (item=service) 2026-04-09 01:01:48.218752 | orchestrator | 2026-04-09 01:01:48.218758 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-09 01:01:48.218765 | orchestrator | Thursday 09 April 2026 01:00:21 +0000 (0:00:03.002) 0:00:16.359 ******** 2026-04-09 01:01:48.218771 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-04-09 01:01:48.218778 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:01:48.218784 | orchestrator | 2026-04-09 01:01:48.218790 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-09 01:01:48.218796 | orchestrator | Thursday 09 April 2026 01:00:25 +0000 (0:00:03.292) 0:00:19.651 ******** 2026-04-09 01:01:48.218803 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-04-09 01:01:48.218809 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-04-09 01:01:48.218816 | orchestrator | 2026-04-09 01:01:48.218850 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] *********** 2026-04-09 01:01:48.218855 | orchestrator | Thursday 09 April 2026 01:00:31 +0000 
(0:00:06.184) 0:00:25.836 ******** 2026-04-09 01:01:48.218859 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-04-09 01:01:48.218863 | orchestrator | 2026-04-09 01:01:48.218867 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:01:48.218872 | orchestrator | testbed-manager : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:01:48.218876 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:01:48.218887 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:01:48.218897 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:01:48.218901 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:01:48.218914 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:01:48.218918 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:01:48.219057 | orchestrator | 2026-04-09 01:01:48.219062 | orchestrator | 2026-04-09 01:01:48.219066 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:01:48.219070 | orchestrator | Thursday 09 April 2026 01:00:38 +0000 (0:00:06.976) 0:00:32.812 ******** 2026-04-09 01:01:48.219074 | orchestrator | =============================================================================== 2026-04-09 01:01:48.219077 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 6.98s 2026-04-09 01:01:48.219081 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 6.80s 2026-04-09 01:01:48.219085 | orchestrator | service-ks-register : 
ceph-rgw | Creating roles ------------------------- 6.18s 2026-04-09 01:01:48.219089 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 4.00s 2026-04-09 01:01:48.219093 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.29s 2026-04-09 01:01:48.219097 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.00s 2026-04-09 01:01:48.219100 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 0.89s 2026-04-09 01:01:48.219104 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-04-09 01:01:48.219108 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s 2026-04-09 01:01:48.219112 | orchestrator | 2026-04-09 01:01:48.219116 | orchestrator | 2026-04-09 01:01:48.219119 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:01:48.219123 | orchestrator | 2026-04-09 01:01:48.219232 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:01:48.219236 | orchestrator | Thursday 09 April 2026 00:59:14 +0000 (0:00:00.324) 0:00:00.324 ******** 2026-04-09 01:01:48.219240 | orchestrator | ok: [testbed-manager] 2026-04-09 01:01:48.219279 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:01:48.219285 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:01:48.219289 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:01:48.219292 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:01:48.219296 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:01:48.219300 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:01:48.219304 | orchestrator | 2026-04-09 01:01:48.219308 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:01:48.219311 | orchestrator | Thursday 09 April 2026 
00:59:14 +0000 (0:00:00.777) 0:00:01.101 ******** 2026-04-09 01:01:48.219315 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-09 01:01:48.219319 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-09 01:01:48.219323 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-09 01:01:48.219327 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-09 01:01:48.219331 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-09 01:01:48.219334 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-09 01:01:48.219338 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-09 01:01:48.219342 | orchestrator | 2026-04-09 01:01:48.219346 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-09 01:01:48.219354 | orchestrator | 2026-04-09 01:01:48.219358 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-09 01:01:48.219362 | orchestrator | Thursday 09 April 2026 00:59:15 +0000 (0:00:00.938) 0:00:02.039 ******** 2026-04-09 01:01:48.219366 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:01:48.219370 | orchestrator | 2026-04-09 01:01:48.219374 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-09 01:01:48.219378 | orchestrator | Thursday 09 April 2026 00:59:16 +0000 (0:00:01.230) 0:00:03.270 ******** 2026-04-09 01:01:48.219388 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 01:01:48.219407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219418 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219450 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219474 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:01:48.219479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219498 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219502 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-04-09 01:01:48.219541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219593 | orchestrator | 2026-04-09 01:01:48.219597 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-09 01:01:48.219601 | orchestrator | Thursday 09 April 2026 00:59:20 +0000 (0:00:03.725) 0:00:06.996 ******** 2026-04-09 01:01:48.219605 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:01:48.219609 | orchestrator | 2026-04-09 01:01:48.219613 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-09 01:01:48.219617 | orchestrator | Thursday 09 April 2026 00:59:22 +0000 (0:00:01.301) 0:00:08.297 ******** 2026-04-09 01:01:48.219621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219645 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 01:01:48.219654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219685 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.219689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219712 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:01:48.219776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.219782 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.219798 | orchestrator | 2026-04-09 01:01:48.219802 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-09 01:01:48.219806 | orchestrator | Thursday 09 April 2026 00:59:27 +0000 (0:00:05.343) 0:00:13.641 ******** 2026-04-09 01:01:48.219812 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 01:01:48.219825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.219833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.219837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-09 01:01:48.219841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.219845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.219849 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.219853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.219859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.219874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.219879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.219883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.219887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.219891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.219895 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:01:48.219909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.219916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.219920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.219924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.219928 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:01:48.219933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.219938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.219943 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:01:48.219948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.219953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.219960 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:01:48.219976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.219981 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:01:48.219985 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.219990 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:01:48.219995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220000 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:01:48.220004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.220009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220019 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.220023 | orchestrator | 2026-04-09 01:01:48.220028 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-09 01:01:48.220037 | orchestrator | Thursday 09 April 2026 00:59:29 +0000 (0:00:01.894) 0:00:15.535 ******** 2026-04-09 01:01:48.220041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.220056 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 01:01:48.220061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2026-04-09 01:01:48.220069 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.220073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220087 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:01:48.220101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220110 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-04-09 01:01:48.220114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220122 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:01:48.220126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.220135 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:01:48.220148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220152 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220156 | orchestrator | skipping: 
[testbed-manager] 2026-04-09 01:01:48.220160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.220168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.220175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:01:48.220197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 
01:01:48.220201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:01:48.220205 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:01:48.220209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220213 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:01:48.220217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220228 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.220232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.220235 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:01:48.220239 | orchestrator | 2026-04-09 01:01:48.220243 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-09 01:01:48.220247 | orchestrator | Thursday 09 April 2026 00:59:31 +0000 (0:00:02.275) 0:00:17.811 ******** 2026-04-09 01:01:48.220262 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 01:01:48.220267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.220271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.220275 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.220282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.220286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.220290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.220295 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:01:48.220301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220309 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220346 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 
'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:01:48.220352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220360 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220364 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220377 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:01:48.220392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220400 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:01:48.220404 | orchestrator | 2026-04-09 01:01:48.220408 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-09 01:01:48.220412 | orchestrator | Thursday 09 April 2026 00:59:37 +0000 (0:00:06.191) 0:00:24.002 ******** 2026-04-09 01:01:48.220416 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:01:48.220419 | orchestrator | 2026-04-09 01:01:48.220423 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-09 01:01:48.220493 | orchestrator | Thursday 09 April 2026 00:59:38 +0000 (0:00:00.902) 0:00:24.904 ******** 2026-04-09 01:01:48.220497 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:01:48.220501 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:01:48.220505 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:01:48.220509 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:01:48.220513 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:01:48.220516 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:01:48.220520 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.220524 | orchestrator | 2026-04-09 01:01:48.220528 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-09 01:01:48.220534 | orchestrator | Thursday 09 April 2026 00:59:39 +0000 (0:00:00.821) 0:00:25.725 ******** 2026-04-09 01:01:48.220538 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:01:48.220542 
| orchestrator | 2026-04-09 01:01:48.220546 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-09 01:01:48.220550 | orchestrator | Thursday 09 April 2026 00:59:40 +0000 (0:00:00.732) 0:00:26.458 ******** 2026-04-09 01:01:48.220554 | orchestrator | [WARNING]: Skipped 2026-04-09 01:01:48.220560 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220564 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-09 01:01:48.220568 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220572 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-09 01:01:48.220580 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:01:48.220584 | orchestrator | [WARNING]: Skipped 2026-04-09 01:01:48.220588 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220591 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-09 01:01:48.220595 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220599 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-09 01:01:48.220603 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:01:48.220607 | orchestrator | [WARNING]: Skipped 2026-04-09 01:01:48.220611 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220615 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-09 01:01:48.220618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220622 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-09 01:01:48.220626 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 01:01:48.220630 | orchestrator 
| [WARNING]: Skipped 2026-04-09 01:01:48.220634 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220637 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-09 01:01:48.220641 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220645 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-09 01:01:48.220651 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 01:01:48.220658 | orchestrator | [WARNING]: Skipped 2026-04-09 01:01:48.220663 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220674 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-09 01:01:48.220680 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220685 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-09 01:01:48.220691 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 01:01:48.220697 | orchestrator | [WARNING]: Skipped 2026-04-09 01:01:48.220703 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220708 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-09 01:01:48.220713 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220718 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-09 01:01:48.220724 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 01:01:48.220730 | orchestrator | [WARNING]: Skipped 2026-04-09 01:01:48.220736 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220741 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-09 01:01:48.220747 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:01:48.220752 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-09 01:01:48.220758 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 01:01:48.220764 | orchestrator | 2026-04-09 01:01:48.220769 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-09 01:01:48.220775 | orchestrator | Thursday 09 April 2026 00:59:42 +0000 (0:00:02.286) 0:00:28.745 ******** 2026-04-09 01:01:48.220780 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:01:48.220786 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:01:48.220792 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:01:48.220798 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:01:48.220803 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:01:48.220816 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:01:48.220821 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:01:48.220827 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:01:48.220833 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:01:48.220839 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:01:48.220845 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:01:48.220852 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.220858 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-09 01:01:48.220864 | orchestrator | 2026-04-09 01:01:48.220870 | orchestrator | TASK 
[prometheus : Copying over prometheus web config file] ******************** 2026-04-09 01:01:48.220876 | orchestrator | Thursday 09 April 2026 00:59:54 +0000 (0:00:12.433) 0:00:41.179 ******** 2026-04-09 01:01:48.220886 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:01:48.220892 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:01:48.220898 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:01:48.220908 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:01:48.220914 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:01:48.220921 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:01:48.220927 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:01:48.220932 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:01:48.220939 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:01:48.220945 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:01:48.220951 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:01:48.220957 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.220963 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-09 01:01:48.220970 | orchestrator | 2026-04-09 01:01:48.220975 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-09 01:01:48.220981 | orchestrator | Thursday 09 April 2026 00:59:57 +0000 (0:00:03.063) 0:00:44.242 ******** 2026-04-09 01:01:48.220987 | orchestrator | skipping: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:01:48.220994 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:01:48.221000 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:01:48.221006 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:01:48.221012 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:01:48.221019 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:01:48.221025 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:01:48.221031 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:01:48.221037 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:01:48.221042 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:01:48.221048 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:01:48.221055 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.221061 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-09 01:01:48.221073 | orchestrator | 2026-04-09 01:01:48.221079 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-09 01:01:48.221085 | orchestrator | Thursday 09 April 2026 00:59:59 +0000 (0:00:01.434) 0:00:45.676 ******** 2026-04-09 01:01:48.221092 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:01:48.221098 | orchestrator | 2026-04-09 
01:01:48.221104 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-09 01:01:48.221110 | orchestrator | Thursday 09 April 2026 01:00:00 +0000 (0:00:00.702) 0:00:46.379 ******** 2026-04-09 01:01:48.221116 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:01:48.221122 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:01:48.221129 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:01:48.221135 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:01:48.221142 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:01:48.221147 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:01:48.221150 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.221154 | orchestrator | 2026-04-09 01:01:48.221158 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-09 01:01:48.221162 | orchestrator | Thursday 09 April 2026 01:00:00 +0000 (0:00:00.768) 0:00:47.148 ******** 2026-04-09 01:01:48.221166 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:01:48.221170 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:01:48.221173 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:01:48.221177 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.221181 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:01:48.221185 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:01:48.221188 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:01:48.221192 | orchestrator | 2026-04-09 01:01:48.221196 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-09 01:01:48.221200 | orchestrator | Thursday 09 April 2026 01:00:02 +0000 (0:00:02.015) 0:00:49.163 ******** 2026-04-09 01:01:48.221203 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 01:01:48.221207 | orchestrator | skipping: 
[testbed-manager]
2026-04-09 01:01:48.221211 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 01:01:48.221215 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:01:48.221218 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 01:01:48.221222 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:01:48.221226 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 01:01:48.221234 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:01:48.221238 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 01:01:48.221241 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:01:48.221245 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 01:01:48.221253 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:01:48.221257 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 01:01:48.221261 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:01:48.221265 | orchestrator |
2026-04-09 01:01:48.221268 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-09 01:01:48.221272 | orchestrator | Thursday 09 April 2026 01:00:04 +0000 (0:00:01.726) 0:00:50.890 ********
2026-04-09 01:01:48.221276 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 01:01:48.221280 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:01:48.221284 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 01:01:48.221291 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:01:48.221294 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 01:01:48.221298 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:01:48.221302 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 01:01:48.221306 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:01:48.221310 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 01:01:48.221314 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:01:48.221317 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 01:01:48.221321 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:01:48.221325 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 01:01:48.221329 | orchestrator |
2026-04-09 01:01:48.221333 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-09 01:01:48.221337 | orchestrator | Thursday 09 April 2026 01:00:06 +0000 (0:00:01.900) 0:00:52.790 ********
2026-04-09 01:01:48.221340 | orchestrator | [WARNING]: Skipped
2026-04-09 01:01:48.221344 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-04-09 01:01:48.221348 | orchestrator | due to this access issue:
2026-04-09 01:01:48.221352 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-04-09 01:01:48.221356 | orchestrator | not a directory
2026-04-09 01:01:48.221360 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:01:48.221364 | orchestrator |
2026-04-09 01:01:48.221367 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-09 01:01:48.221371 | orchestrator | Thursday 09 April 2026 01:00:07 +0000 (0:00:01.026) 0:00:53.816 ********
2026-04-09 01:01:48.221375 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:01:48.221381 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:01:48.221386 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:01:48.221390 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:01:48.221393 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:01:48.221397 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:01:48.221401 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:01:48.221405 | orchestrator |
2026-04-09 01:01:48.221409 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-09 01:01:48.221412 | orchestrator | Thursday 09 April 2026 01:00:08 +0000 (0:00:00.614) 0:00:54.431 ********
2026-04-09 01:01:48.221416 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:01:48.221420 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:01:48.221424 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:01:48.221443 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:01:48.221447 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:01:48.221451 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:01:48.221455 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:01:48.221458 | orchestrator |
2026-04-09 01:01:48.221462 | orchestrator | TASK [service-check-containers : prometheus | Check containers] ****************
2026-04-09 01:01:48.221466 | orchestrator | Thursday 09 April 2026 01:00:08 +0000 (0:00:00.695) 0:00:55.127 ********
2026-04-09 01:01:48.221474 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 01:01:48.221486 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221511 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221548 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:01:48.221559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221584 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221593 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221633 | orchestrator |
2026-04-09 01:01:48.221638 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-04-09 01:01:48.221642 | orchestrator | Thursday 09 April 2026 01:00:13 +0000 (0:00:04.438) 0:00:59.565 ********
2026-04-09 01:01:48.221647 | orchestrator | changed: [testbed-manager] => {
2026-04-09 01:01:48.221652 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:01:48.221656 | orchestrator | }
2026-04-09 01:01:48.221661 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 01:01:48.221665 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:01:48.221670 | orchestrator | }
2026-04-09 01:01:48.221674 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 01:01:48.221679 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:01:48.221684 | orchestrator | }
2026-04-09 01:01:48.221688 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 01:01:48.221692 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:01:48.221697 | orchestrator | }
2026-04-09 01:01:48.221702 | orchestrator | changed: [testbed-node-3] => {
2026-04-09 01:01:48.221706 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:01:48.221711 | orchestrator | }
2026-04-09 01:01:48.221715 | orchestrator | changed: [testbed-node-4] => {
2026-04-09 01:01:48.221720 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:01:48.221724 | orchestrator | }
2026-04-09 01:01:48.221728 | orchestrator | changed: [testbed-node-5] => {
2026-04-09 01:01:48.221733 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:01:48.221737 | orchestrator | }
2026-04-09 01:01:48.221741 | orchestrator |
2026-04-09 01:01:48.221746 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 01:01:48.221750 | orchestrator | Thursday 09 April 2026 01:00:13 +0000 (0:00:00.613) 0:01:00.179 ********
2026-04-09 01:01:48.221761 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 01:01:48.221766 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221771 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221778 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:01:48.221784 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221788 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:01:48.221795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221853 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:01:48.221857 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:01:48.221862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:01:48.221889 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:01:48.221894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221912 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:01:48.221917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 01:01:48.221934 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:01:48.221938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 01:01:48.221942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro',
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.221946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:01:48.221950 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:01:48.221954 | orchestrator | 2026-04-09 01:01:48.221957 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-09 01:01:48.221963 | orchestrator | Thursday 09 April 2026 01:00:15 +0000 (0:00:01.558) 0:01:01.738 ******** 2026-04-09 01:01:48.221967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-09 01:01:48.221971 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:01:48.221975 | orchestrator | 2026-04-09 01:01:48.221979 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:01:48.221985 | orchestrator | Thursday 09 April 2026 01:00:16 +0000 (0:00:01.027) 0:01:02.765 ******** 2026-04-09 01:01:48.221989 | orchestrator | 2026-04-09 01:01:48.221993 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:01:48.221996 | orchestrator | Thursday 09 April 2026 01:00:16 +0000 (0:00:00.165) 0:01:02.930 ******** 2026-04-09 01:01:48.222000 | orchestrator | 2026-04-09 01:01:48.222004 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:01:48.222008 | orchestrator | Thursday 09 April 2026 01:00:16 +0000 (0:00:00.060) 0:01:02.991 ******** 2026-04-09 01:01:48.222031 | orchestrator | 
2026-04-09 01:01:48.222039 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:01:48.222043 | orchestrator | Thursday 09 April 2026 01:00:16 +0000 (0:00:00.056) 0:01:03.048 ******** 2026-04-09 01:01:48.222047 | orchestrator | 2026-04-09 01:01:48.222050 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:01:48.222054 | orchestrator | Thursday 09 April 2026 01:00:16 +0000 (0:00:00.058) 0:01:03.106 ******** 2026-04-09 01:01:48.222058 | orchestrator | 2026-04-09 01:01:48.222062 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:01:48.222066 | orchestrator | Thursday 09 April 2026 01:00:16 +0000 (0:00:00.060) 0:01:03.167 ******** 2026-04-09 01:01:48.222070 | orchestrator | 2026-04-09 01:01:48.222073 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:01:48.222077 | orchestrator | Thursday 09 April 2026 01:00:16 +0000 (0:00:00.064) 0:01:03.231 ******** 2026-04-09 01:01:48.222081 | orchestrator | 2026-04-09 01:01:48.222085 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-09 01:01:48.222089 | orchestrator | Thursday 09 April 2026 01:00:17 +0000 (0:00:00.080) 0:01:03.312 ******** 2026-04-09 01:01:48.222092 | orchestrator | changed: [testbed-manager] 2026-04-09 01:01:48.222098 | orchestrator | 2026-04-09 01:01:48.222104 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-09 01:01:48.222110 | orchestrator | Thursday 09 April 2026 01:00:33 +0000 (0:00:16.562) 0:01:19.875 ******** 2026-04-09 01:01:48.222116 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:01:48.222122 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:01:48.222129 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:01:48.222134 | orchestrator | 
changed: [testbed-node-1] 2026-04-09 01:01:48.222140 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:01:48.222145 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:01:48.222151 | orchestrator | changed: [testbed-manager] 2026-04-09 01:01:48.222157 | orchestrator | 2026-04-09 01:01:48.222163 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-09 01:01:48.222168 | orchestrator | Thursday 09 April 2026 01:00:45 +0000 (0:00:12.155) 0:01:32.031 ******** 2026-04-09 01:01:48.222174 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:01:48.222180 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:01:48.222186 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:01:48.222193 | orchestrator | 2026-04-09 01:01:48.222199 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-09 01:01:48.222205 | orchestrator | Thursday 09 April 2026 01:00:55 +0000 (0:00:10.134) 0:01:42.165 ******** 2026-04-09 01:01:48.222211 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:01:48.222217 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:01:48.222224 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:01:48.222230 | orchestrator | 2026-04-09 01:01:48.222237 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-09 01:01:48.222241 | orchestrator | Thursday 09 April 2026 01:01:06 +0000 (0:00:10.882) 0:01:53.048 ******** 2026-04-09 01:01:48.222245 | orchestrator | changed: [testbed-manager] 2026-04-09 01:01:48.222249 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:01:48.222253 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:01:48.222257 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:01:48.222260 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:01:48.222264 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:01:48.222268 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 01:01:48.222272 | orchestrator | 2026-04-09 01:01:48.222276 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-09 01:01:48.222279 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:15.156) 0:02:08.205 ******** 2026-04-09 01:01:48.222283 | orchestrator | changed: [testbed-manager] 2026-04-09 01:01:48.222287 | orchestrator | 2026-04-09 01:01:48.222291 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-09 01:01:48.222295 | orchestrator | Thursday 09 April 2026 01:01:29 +0000 (0:00:07.286) 0:02:15.492 ******** 2026-04-09 01:01:48.222302 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:01:48.222306 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:01:48.222310 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:01:48.222314 | orchestrator | 2026-04-09 01:01:48.222318 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-09 01:01:48.222321 | orchestrator | Thursday 09 April 2026 01:01:34 +0000 (0:00:05.675) 0:02:21.167 ******** 2026-04-09 01:01:48.222325 | orchestrator | changed: [testbed-manager] 2026-04-09 01:01:48.222329 | orchestrator | 2026-04-09 01:01:48.222333 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-09 01:01:48.222337 | orchestrator | Thursday 09 April 2026 01:01:40 +0000 (0:00:05.174) 0:02:26.342 ******** 2026-04-09 01:01:48.222341 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:01:48.222344 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:01:48.222348 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:01:48.222352 | orchestrator | 2026-04-09 01:01:48.222356 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:01:48.222360 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 
failed=0 skipped=10  rescued=0 ignored=0 2026-04-09 01:01:48.222365 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 01:01:48.222372 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 01:01:48.222376 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 01:01:48.222380 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 01:01:48.222384 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 01:01:48.222388 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 01:01:48.222392 | orchestrator | 2026-04-09 01:01:48.222395 | orchestrator | 2026-04-09 01:01:48.222399 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:01:48.222403 | orchestrator | Thursday 09 April 2026 01:01:47 +0000 (0:00:07.036) 0:02:33.378 ******** 2026-04-09 01:01:48.222407 | orchestrator | =============================================================================== 2026-04-09 01:01:48.222411 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.56s 2026-04-09 01:01:48.222415 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.16s 2026-04-09 01:01:48.222420 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.43s 2026-04-09 01:01:48.222437 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.16s 2026-04-09 01:01:48.222444 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.88s 2026-04-09 01:01:48.222449 | orchestrator | prometheus : Restart 
prometheus-mysqld-exporter container -------------- 10.13s 2026-04-09 01:01:48.222456 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.29s 2026-04-09 01:01:48.222517 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.04s 2026-04-09 01:01:48.222529 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.19s 2026-04-09 01:01:48.222533 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.68s 2026-04-09 01:01:48.222537 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.34s 2026-04-09 01:01:48.222544 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.17s 2026-04-09 01:01:48.222548 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.44s 2026-04-09 01:01:48.222552 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.73s 2026-04-09 01:01:48.222556 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.06s 2026-04-09 01:01:48.222559 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.29s 2026-04-09 01:01:48.222563 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.28s 2026-04-09 01:01:48.222567 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.02s 2026-04-09 01:01:48.222571 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.90s 2026-04-09 01:01:48.222574 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.89s 2026-04-09 01:01:51.247019 | orchestrator | 2026-04-09 01:01:51 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:01:51.248097 | orchestrator | 2026-04-09 
01:01:51 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:01:51.248460 | orchestrator | 2026-04-09 01:01:51 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:01:51.249153 | orchestrator | 2026-04-09 01:01:51 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:01:51.249202 | orchestrator | 2026-04-09 01:01:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:01:54.274723 | orchestrator | 2026-04-09 01:01:54 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:01:54.277620 | orchestrator | 2026-04-09 01:01:54 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:01:54.279172 | orchestrator | 2026-04-09 01:01:54 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:01:54.281251 | orchestrator | 2026-04-09 01:01:54 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:01:54.281689 | orchestrator | 2026-04-09 01:01:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:01:57.303884 | orchestrator | 2026-04-09 01:01:57 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:01:57.304324 | orchestrator | 2026-04-09 01:01:57 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:01:57.305048 | orchestrator | 2026-04-09 01:01:57 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:01:57.305857 | orchestrator | 2026-04-09 01:01:57 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:01:57.305893 | orchestrator | 2026-04-09 01:01:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:00.339709 | orchestrator | 2026-04-09 01:02:00 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:02:00.341901 | orchestrator | 2026-04-09 01:02:00 | INFO  | Task 
b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:02:45.984302 | orchestrator | 2026-04-09 01:02:45 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:02:45.986250 | orchestrator | 2026-04-09 01:02:45 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:02:45.986385 | orchestrator | 2026-04-09 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:49.053329 | orchestrator | 2026-04-09 01:02:49 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:02:49.055644 | orchestrator | 2026-04-09 01:02:49 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:02:49.058012 | orchestrator | 2026-04-09 01:02:49 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:02:49.059816 | orchestrator | 2026-04-09 01:02:49 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:02:49.059864 | orchestrator | 2026-04-09 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:52.095729 | orchestrator | 2026-04-09 01:02:52 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state STARTED 2026-04-09 01:02:52.097175 | orchestrator | 2026-04-09 01:02:52 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:02:52.099300 | orchestrator | 2026-04-09 01:02:52 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:02:52.100656 | orchestrator | 2026-04-09 01:02:52 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:02:52.100709 | orchestrator | 2026-04-09 01:02:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:55.129077 | orchestrator | 2026-04-09 01:02:55.129125 | orchestrator | 2026-04-09 01:02:55.129132 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:02:55.129137 | 
orchestrator | 2026-04-09 01:02:55.129141 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:02:55.129158 | orchestrator | Thursday 09 April 2026 01:00:06 +0000 (0:00:00.326) 0:00:00.326 ******** 2026-04-09 01:02:55.129162 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:02:55.129167 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:02:55.129170 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:02:55.129174 | orchestrator | 2026-04-09 01:02:55.129178 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:02:55.129182 | orchestrator | Thursday 09 April 2026 01:00:06 +0000 (0:00:00.223) 0:00:00.550 ******** 2026-04-09 01:02:55.129186 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-09 01:02:55.129190 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-09 01:02:55.129195 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-09 01:02:55.129199 | orchestrator | 2026-04-09 01:02:55.129203 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-09 01:02:55.129206 | orchestrator | 2026-04-09 01:02:55.129210 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 01:02:55.129214 | orchestrator | Thursday 09 April 2026 01:00:06 +0000 (0:00:00.251) 0:00:00.801 ******** 2026-04-09 01:02:55.129218 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:02:55.129222 | orchestrator | 2026-04-09 01:02:55.129226 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] *************** 2026-04-09 01:02:55.129230 | orchestrator | Thursday 09 April 2026 01:00:07 +0000 (0:00:00.497) 0:00:01.298 ******** 2026-04-09 01:02:55.129234 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 
2026-04-09 01:02:55.129238 | orchestrator | 2026-04-09 01:02:55.129241 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] ************** 2026-04-09 01:02:55.129245 | orchestrator | Thursday 09 April 2026 01:00:11 +0000 (0:00:04.401) 0:00:05.700 ******** 2026-04-09 01:02:55.129347 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-09 01:02:55.129354 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-09 01:02:55.129409 | orchestrator | 2026-04-09 01:02:55.129414 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-09 01:02:55.129418 | orchestrator | Thursday 09 April 2026 01:00:19 +0000 (0:00:08.287) 0:00:13.987 ******** 2026-04-09 01:02:55.129422 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:02:55.129426 | orchestrator | 2026-04-09 01:02:55.129431 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-09 01:02:55.129434 | orchestrator | Thursday 09 April 2026 01:00:23 +0000 (0:00:04.088) 0:00:18.076 ******** 2026-04-09 01:02:55.129438 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-09 01:02:55.129442 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:02:55.129446 | orchestrator | 2026-04-09 01:02:55.129450 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-04-09 01:02:55.129454 | orchestrator | Thursday 09 April 2026 01:00:28 +0000 (0:00:04.680) 0:00:22.756 ******** 2026-04-09 01:02:55.129458 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:02:55.129462 | orchestrator | 2026-04-09 01:02:55.129465 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] ************* 2026-04-09 01:02:55.129469 | orchestrator | 
Thursday 09 April 2026 01:00:32 +0000 (0:00:03.966) 0:00:26.723 ******** 2026-04-09 01:02:55.129473 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-09 01:02:55.129477 | orchestrator | 2026-04-09 01:02:55.129481 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-09 01:02:55.129484 | orchestrator | Thursday 09 April 2026 01:00:37 +0000 (0:00:04.692) 0:00:31.415 ******** 2026-04-09 01:02:55.129499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.129512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2026-04-09 01:02:55.129516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.129524 | orchestrator | 2026-04-09 01:02:55.129528 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 01:02:55.129531 | orchestrator | Thursday 09 April 2026 
01:00:41 +0000 (0:00:04.333) 0:00:35.749 ******** 2026-04-09 01:02:55.129540 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:02:55.129544 | orchestrator | 2026-04-09 01:02:55.129549 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-09 01:02:55.129555 | orchestrator | Thursday 09 April 2026 01:00:42 +0000 (0:00:00.571) 0:00:36.320 ******** 2026-04-09 01:02:55.129561 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:55.129567 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.129574 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:55.129579 | orchestrator | 2026-04-09 01:02:55.129585 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-09 01:02:55.129591 | orchestrator | Thursday 09 April 2026 01:00:45 +0000 (0:00:02.983) 0:00:39.304 ******** 2026-04-09 01:02:55.129598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 01:02:55.129613 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 01:02:55.129626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 01:02:55.129682 | orchestrator | 2026-04-09 01:02:55.129688 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-09 01:02:55.129692 | orchestrator | Thursday 09 April 2026 01:00:46 +0000 (0:00:01.602) 0:00:40.906 ******** 2026-04-09 01:02:55.129696 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': 
True}) 2026-04-09 01:02:55.129700 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 01:02:55.129704 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-09 01:02:55.129707 | orchestrator | 2026-04-09 01:02:55.129711 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-09 01:02:55.129715 | orchestrator | Thursday 09 April 2026 01:00:48 +0000 (0:00:01.414) 0:00:42.321 ******** 2026-04-09 01:02:55.129719 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:02:55.129723 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:02:55.129727 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:02:55.129730 | orchestrator | 2026-04-09 01:02:55.129734 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-09 01:02:55.129738 | orchestrator | Thursday 09 April 2026 01:00:48 +0000 (0:00:00.645) 0:00:42.966 ******** 2026-04-09 01:02:55.129746 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.129750 | orchestrator | 2026-04-09 01:02:55.129755 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-09 01:02:55.129759 | orchestrator | Thursday 09 April 2026 01:00:48 +0000 (0:00:00.141) 0:00:43.108 ******** 2026-04-09 01:02:55.129763 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.129767 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.129770 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.129774 | orchestrator | 2026-04-09 01:02:55.129778 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 01:02:55.129782 | orchestrator | Thursday 09 April 2026 01:00:49 +0000 (0:00:00.250) 0:00:43.359 ******** 
2026-04-09 01:02:55.129785 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:02:55.129789 | orchestrator | 2026-04-09 01:02:55.129793 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-09 01:02:55.129797 | orchestrator | Thursday 09 April 2026 01:00:49 +0000 (0:00:00.582) 0:00:43.941 ******** 2026-04-09 01:02:55.129807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.129812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 
01:02:55.129819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.129824 | orchestrator | 2026-04-09 01:02:55.129828 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-09 01:02:55.129831 | orchestrator | Thursday 09 April 2026 
01:00:53 +0000 (0:00:03.277) 0:00:47.219 ******** 2026-04-09 01:02:55.129839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.129846 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.129851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.129855 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.129863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.129867 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.129871 | orchestrator | 2026-04-09 01:02:55.129880 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-09 01:02:55.129884 | orchestrator | Thursday 09 April 2026 01:00:55 +0000 (0:00:02.484) 0:00:49.703 ******** 2026-04-09 01:02:55.129888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.129892 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.129899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.129903 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.129907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.129914 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.129918 | orchestrator | 2026-04-09 01:02:55.129922 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-09 01:02:55.129926 | orchestrator | Thursday 09 April 2026 01:00:59 +0000 (0:00:04.282) 0:00:53.986 ******** 2026-04-09 01:02:55.129929 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.129933 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.129937 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.129941 | orchestrator | 2026-04-09 01:02:55.129945 | 
orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-09 01:02:55.129949 | orchestrator | Thursday 09 April 2026 01:01:02 +0000 (0:00:02.439) 0:00:56.425 ******** 2026-04-09 01:02:55.129956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 
01:02:55.129961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.129968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.129972 | orchestrator | 2026-04-09 01:02:55.129976 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-09 01:02:55.129982 | orchestrator | Thursday 09 April 2026 01:01:05 +0000 (0:00:03.121) 0:00:59.546 ******** 2026-04-09 01:02:55 | INFO  | Task e5fe0822-d4c4-45de-a5fe-8f643bc788ac is in state SUCCESS 2026-04-09 01:02:55.129990 | 
orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:55.129994 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:55.129998 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.130004 | orchestrator | 2026-04-09 01:02:55.130008 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-09 01:02:55.130012 | orchestrator | Thursday 09 April 2026 01:01:12 +0000 (0:00:06.631) 0:01:06.178 ******** 2026-04-09 01:02:55.130042 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130045 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130049 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.130053 | orchestrator | 2026-04-09 01:02:55.130057 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-09 01:02:55.130061 | orchestrator | Thursday 09 April 2026 01:01:15 +0000 (0:00:03.349) 0:01:09.528 ******** 2026-04-09 01:02:55.130064 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130068 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130072 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.130076 | orchestrator | 2026-04-09 01:02:55.130080 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-09 01:02:55.130083 | orchestrator | Thursday 09 April 2026 01:01:17 +0000 (0:00:02.525) 0:01:12.054 ******** 2026-04-09 01:02:55.130087 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130091 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.130095 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130099 | orchestrator | 2026-04-09 01:02:55.130103 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-09 01:02:55.130106 | orchestrator | Thursday 09 April 2026 01:01:20 +0000 (0:00:02.870) 0:01:14.925 ******** 2026-04-09 01:02:55.130110 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130114 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130118 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.130121 | orchestrator | 2026-04-09 01:02:55.130125 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-09 01:02:55.130129 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:00.278) 0:01:15.204 ******** 2026-04-09 01:02:55.130133 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-09 01:02:55.130137 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130141 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-09 01:02:55.130145 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130149 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-09 01:02:55.130153 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.130156 | orchestrator | 2026-04-09 01:02:55.130160 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-09 01:02:55.130164 | orchestrator | Thursday 09 April 2026 01:01:24 +0000 (0:00:03.065) 0:01:18.269 ******** 2026-04-09 01:02:55.130168 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130171 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.130175 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130179 | orchestrator | 2026-04-09 01:02:55.130183 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-09 01:02:55.130186 | orchestrator | Thursday 09 April 2026 01:01:27 +0000 (0:00:03.326) 0:01:21.596 ******** 2026-04-09 01:02:55.130190 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130194 | orchestrator | 
skipping: [testbed-node-2] 2026-04-09 01:02:55.130198 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130201 | orchestrator | 2026-04-09 01:02:55.130205 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-09 01:02:55.130209 | orchestrator | Thursday 09 April 2026 01:01:31 +0000 (0:00:04.445) 0:01:26.041 ******** 2026-04-09 01:02:55.130216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.130224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.130228 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:55.130237 | orchestrator | 2026-04-09 01:02:55.130241 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-09 01:02:55.130246 | orchestrator | Thursday 09 April 2026 01:01:35 +0000 (0:00:03.235) 0:01:29.277 ******** 2026-04-09 
01:02:55.130252 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 01:02:55.130258 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:02:55.130264 | orchestrator | } 2026-04-09 01:02:55.130272 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 01:02:55.130278 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:02:55.130284 | orchestrator | } 2026-04-09 01:02:55.130290 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 01:02:55.130296 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:02:55.130302 | orchestrator | } 2026-04-09 01:02:55.130309 | orchestrator | 2026-04-09 01:02:55.130315 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 01:02:55.130419 | orchestrator | Thursday 09 April 2026 01:01:35 +0000 (0:00:00.384) 0:01:29.661 ******** 2026-04-09 01:02:55.130428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.130434 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.130449 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:55.130465 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.130469 | orchestrator | 2026-04-09 01:02:55.130474 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 01:02:55.130479 | orchestrator | Thursday 09 April 2026 01:01:38 +0000 (0:00:03.071) 0:01:32.732 ******** 2026-04-09 01:02:55.130483 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.130488 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.130492 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.130497 | orchestrator | 2026-04-09 01:02:55.130502 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-09 01:02:55.130505 | orchestrator | Thursday 09 April 2026 01:01:38 +0000 (0:00:00.327) 0:01:33.060 ******** 2026-04-09 01:02:55.130509 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.130519 | orchestrator | 2026-04-09 01:02:55.130523 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-09 01:02:55.130527 | orchestrator | Thursday 09 April 2026 01:01:41 +0000 (0:00:02.738) 0:01:35.798 ******** 2026-04-09 01:02:55.130530 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.130534 | orchestrator | 2026-04-09 01:02:55.130538 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-09 01:02:55.130542 | orchestrator | Thursday 09 April 2026 01:01:44 +0000 (0:00:03.202) 0:01:39.001 ******** 2026-04-09 01:02:55.130546 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.130549 | 
orchestrator | 2026-04-09 01:02:55.130553 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-09 01:02:55.130557 | orchestrator | Thursday 09 April 2026 01:01:47 +0000 (0:00:02.975) 0:01:41.976 ******** 2026-04-09 01:02:55.130561 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.130564 | orchestrator | 2026-04-09 01:02:55.130568 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-09 01:02:55.130572 | orchestrator | Thursday 09 April 2026 01:02:16 +0000 (0:00:28.553) 0:02:10.529 ******** 2026-04-09 01:02:55.130576 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.130580 | orchestrator | 2026-04-09 01:02:55.130584 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-09 01:02:55.130587 | orchestrator | Thursday 09 April 2026 01:02:18 +0000 (0:00:02.364) 0:02:12.894 ******** 2026-04-09 01:02:55.130591 | orchestrator | 2026-04-09 01:02:55.130595 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-09 01:02:55.130599 | orchestrator | Thursday 09 April 2026 01:02:18 +0000 (0:00:00.058) 0:02:12.953 ******** 2026-04-09 01:02:55.130603 | orchestrator | 2026-04-09 01:02:55.130606 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-09 01:02:55.130610 | orchestrator | Thursday 09 April 2026 01:02:18 +0000 (0:00:00.075) 0:02:13.028 ******** 2026-04-09 01:02:55.130614 | orchestrator | 2026-04-09 01:02:55.130618 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-09 01:02:55.130621 | orchestrator | Thursday 09 April 2026 01:02:18 +0000 (0:00:00.062) 0:02:13.090 ******** 2026-04-09 01:02:55.130625 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.130629 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:55.130633 | 
orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:55.130637 | orchestrator | 2026-04-09 01:02:55.130640 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:02:55.130645 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-09 01:02:55.130649 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 01:02:55.130655 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 01:02:55.130659 | orchestrator | 2026-04-09 01:02:55.130663 | orchestrator | 2026-04-09 01:02:55.130669 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:02:55.130675 | orchestrator | Thursday 09 April 2026 01:02:53 +0000 (0:00:34.243) 0:02:47.334 ******** 2026-04-09 01:02:55.130681 | orchestrator | =============================================================================== 2026-04-09 01:02:55.130687 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.24s 2026-04-09 01:02:55.130693 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.55s 2026-04-09 01:02:55.130698 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 8.29s 2026-04-09 01:02:55.130704 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.63s 2026-04-09 01:02:55.130710 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 4.69s 2026-04-09 01:02:55.130720 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.68s 2026-04-09 01:02:55.130726 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 4.45s 2026-04-09 01:02:55.130733 | orchestrator | 
service-ks-register : glance | Creating/deleting services --------------- 4.40s 2026-04-09 01:02:55.130739 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.33s 2026-04-09 01:02:55.130745 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.28s 2026-04-09 01:02:55.130751 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.09s 2026-04-09 01:02:55.130757 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.97s 2026-04-09 01:02:55.130763 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.35s 2026-04-09 01:02:55.130770 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.33s 2026-04-09 01:02:55.130776 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.28s 2026-04-09 01:02:55.130782 | orchestrator | service-check-containers : glance | Check containers -------------------- 3.24s 2026-04-09 01:02:55.130787 | orchestrator | glance : Creating Glance database user and setting permissions ---------- 3.20s 2026-04-09 01:02:55.130791 | orchestrator | glance : Copying over config.json files for services -------------------- 3.12s 2026-04-09 01:02:55.130795 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.07s 2026-04-09 01:02:55.130799 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.07s 2026-04-09 01:02:55.130803 | orchestrator | 2026-04-09 01:02:55 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:02:55.130807 | orchestrator | 2026-04-09 01:02:55 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:02:55.130811 | orchestrator | 2026-04-09 01:02:55 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 
01:02:55.130815 | orchestrator | 2026-04-09 01:02:55 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:02:55.130819 | orchestrator | 2026-04-09 01:02:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:58.179622 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:02:58.179860 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:02:58.181643 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:02:58.182185 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:02:58.182214 | orchestrator | 2026-04-09 01:02:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:01.208249 | orchestrator | 2026-04-09 01:03:01 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:01.208618 | orchestrator | 2026-04-09 01:03:01 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:01.212247 | orchestrator | 2026-04-09 01:03:01 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:01.212653 | orchestrator | 2026-04-09 01:03:01 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:01.212688 | orchestrator | 2026-04-09 01:03:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:04.237206 | orchestrator | 2026-04-09 01:03:04 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:04.238070 | orchestrator | 2026-04-09 01:03:04 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:04.239502 | orchestrator | 2026-04-09 01:03:04 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:04.240996 | orchestrator 
| 2026-04-09 01:03:04 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:04.241019 | orchestrator | 2026-04-09 01:03:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:07.264085 | orchestrator | 2026-04-09 01:03:07 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:07.264166 | orchestrator | 2026-04-09 01:03:07 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:07.264676 | orchestrator | 2026-04-09 01:03:07 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:07.265301 | orchestrator | 2026-04-09 01:03:07 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:07.265387 | orchestrator | 2026-04-09 01:03:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:10.289297 | orchestrator | 2026-04-09 01:03:10 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:10.291299 | orchestrator | 2026-04-09 01:03:10 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:10.291862 | orchestrator | 2026-04-09 01:03:10 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:10.292538 | orchestrator | 2026-04-09 01:03:10 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:10.292589 | orchestrator | 2026-04-09 01:03:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:13.322103 | orchestrator | 2026-04-09 01:03:13 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:13.323223 | orchestrator | 2026-04-09 01:03:13 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:13.324594 | orchestrator | 2026-04-09 01:03:13 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:13.326098 | orchestrator | 2026-04-09 01:03:13 | INFO  | 
Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:13.326135 | orchestrator | 2026-04-09 01:03:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:16.354433 | orchestrator | 2026-04-09 01:03:16 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:16.355051 | orchestrator | 2026-04-09 01:03:16 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:16.357112 | orchestrator | 2026-04-09 01:03:16 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:16.360448 | orchestrator | 2026-04-09 01:03:16 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:16.360499 | orchestrator | 2026-04-09 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:19.399609 | orchestrator | 2026-04-09 01:03:19 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:19.399927 | orchestrator | 2026-04-09 01:03:19 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:19.401519 | orchestrator | 2026-04-09 01:03:19 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:19.402922 | orchestrator | 2026-04-09 01:03:19 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:19.402962 | orchestrator | 2026-04-09 01:03:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:22.441424 | orchestrator | 2026-04-09 01:03:22 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:22.441807 | orchestrator | 2026-04-09 01:03:22 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:22.442664 | orchestrator | 2026-04-09 01:03:22 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:22.443288 | orchestrator | 2026-04-09 01:03:22 | INFO  | Task 
505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:22.443320 | orchestrator | 2026-04-09 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:25.476887 | orchestrator | 2026-04-09 01:03:25 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:25.478008 | orchestrator | 2026-04-09 01:03:25 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:25.478766 | orchestrator | 2026-04-09 01:03:25 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:25.479584 | orchestrator | 2026-04-09 01:03:25 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:25.479644 | orchestrator | 2026-04-09 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:28.519232 | orchestrator | 2026-04-09 01:03:28 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:28.519874 | orchestrator | 2026-04-09 01:03:28 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:28.520672 | orchestrator | 2026-04-09 01:03:28 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:28.521436 | orchestrator | 2026-04-09 01:03:28 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:28.521448 | orchestrator | 2026-04-09 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:31.552874 | orchestrator | 2026-04-09 01:03:31 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:31.553273 | orchestrator | 2026-04-09 01:03:31 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:31.554293 | orchestrator | 2026-04-09 01:03:31 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:31.554998 | orchestrator | 2026-04-09 01:03:31 | INFO  | Task 
505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:31.555033 | orchestrator | 2026-04-09 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:34.582225 | orchestrator | 2026-04-09 01:03:34 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:34.583993 | orchestrator | 2026-04-09 01:03:34 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:34.585249 | orchestrator | 2026-04-09 01:03:34 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:34.586649 | orchestrator | 2026-04-09 01:03:34 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:34.587286 | orchestrator | 2026-04-09 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:37.618496 | orchestrator | 2026-04-09 01:03:37 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:37.619076 | orchestrator | 2026-04-09 01:03:37 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:37.619897 | orchestrator | 2026-04-09 01:03:37 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:37.620841 | orchestrator | 2026-04-09 01:03:37 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:37.620889 | orchestrator | 2026-04-09 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:40.648728 | orchestrator | 2026-04-09 01:03:40 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:40.650074 | orchestrator | 2026-04-09 01:03:40 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state STARTED 2026-04-09 01:03:40.651558 | orchestrator | 2026-04-09 01:03:40 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:40.652899 | orchestrator | 2026-04-09 01:03:40 | INFO  | Task 
505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED
2026-04-09 01:03:40.652946 | orchestrator | 2026-04-09 01:03:40 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:03:43.686454 | orchestrator | 2026-04-09 01:03:43 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED
2026-04-09 01:03:43.687720 | orchestrator | 2026-04-09 01:03:43 | INFO  | Task b12aba14-5904-4404-9ea2-c0bced460e5e is in state SUCCESS
2026-04-09 01:03:43.688880 | orchestrator |
2026-04-09 01:03:43.688935 | orchestrator |
2026-04-09 01:03:43.688945 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:03:43.688955 | orchestrator |
2026-04-09 01:03:43.688963 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:03:43.688971 | orchestrator | Thursday 09 April 2026 01:00:28 +0000 (0:00:00.399) 0:00:00.399 ********
2026-04-09 01:03:43.688975 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:03:43.688980 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:03:43.688984 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:03:43.688988 | orchestrator |
2026-04-09 01:03:43.688992 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 01:03:43.688996 | orchestrator | Thursday 09 April 2026 01:00:28 +0000 (0:00:00.336) 0:00:00.736 ********
2026-04-09 01:03:43.689000 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-09 01:03:43.689005 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-09 01:03:43.689009 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-09 01:03:43.689013 | orchestrator |
2026-04-09 01:03:43.689017 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-09 01:03:43.689021 | orchestrator |
2026-04-09 01:03:43.689024 | orchestrator | TASK [cinder : include_tasks] 
**************************************************
2026-04-09 01:03:43.689028 | orchestrator | Thursday 09 April 2026 01:00:28 +0000 (0:00:00.327) 0:00:01.063 ********
2026-04-09 01:03:43.689032 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:03:43.689037 | orchestrator |
2026-04-09 01:03:43.689041 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] ***************
2026-04-09 01:03:43.689045 | orchestrator | Thursday 09 April 2026 01:00:29 +0000 (0:00:00.677) 0:00:01.741 ********
2026-04-09 01:03:43.689049 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage))
2026-04-09 01:03:43.689053 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-09 01:03:43.689057 | orchestrator |
2026-04-09 01:03:43.689061 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] **************
2026-04-09 01:03:43.689065 | orchestrator | Thursday 09 April 2026 01:00:37 +0000 (0:00:08.189) 0:00:09.931 ********
2026-04-09 01:03:43.689069 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal)
2026-04-09 01:03:43.689073 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public)
2026-04-09 01:03:43.689076 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-09 01:03:43.689081 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-09 01:03:43.689109 | orchestrator |
2026-04-09 01:03:43.689115 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-09 01:03:43.689121 | orchestrator | Thursday 09 April 2026 01:00:53 +0000 (0:00:15.455) 0:00:25.386 ********
2026-04-09 01:03:43.689127 | 
orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 01:03:43.689133 | orchestrator |
2026-04-09 01:03:43.689139 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-09 01:03:43.689145 | orchestrator | Thursday 09 April 2026 01:00:57 +0000 (0:00:03.791) 0:00:29.178 ********
2026-04-09 01:03:43.689151 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-04-09 01:03:43.689158 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 01:03:43.689164 | orchestrator |
2026-04-09 01:03:43.689170 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-04-09 01:03:43.689176 | orchestrator | Thursday 09 April 2026 01:01:01 +0000 (0:00:04.367) 0:00:33.545 ********
2026-04-09 01:03:43.689183 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 01:03:43.689293 | orchestrator |
2026-04-09 01:03:43.689301 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] *************
2026-04-09 01:03:43.689323 | orchestrator | Thursday 09 April 2026 01:01:05 +0000 (0:00:03.887) 0:00:37.433 ********
2026-04-09 01:03:43.689330 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-04-09 01:03:43.689337 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-04-09 01:03:43.689342 | orchestrator |
2026-04-09 01:03:43.689348 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-04-09 01:03:43.689354 | orchestrator | Thursday 09 April 2026 01:01:14 +0000 (0:00:08.656) 0:00:46.089 ********
2026-04-09 01:03:43.689382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.689392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.689401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.689428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.689556 | orchestrator | 2026-04-09 01:03:43.689562 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 01:03:43.689569 | orchestrator | Thursday 09 April 2026 01:01:17 +0000 (0:00:03.237) 0:00:49.327 ******** 2026-04-09 01:03:43.689575 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.689581 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.689588 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.689593 | orchestrator | 2026-04-09 01:03:43.689599 | orchestrator | TASK [cinder : include_tasks] 
**************************************************
2026-04-09 01:03:43.689604 | orchestrator | Thursday 09 April 2026 01:01:17 +0000 (0:00:00.260) 0:00:49.587 ********
2026-04-09 01:03:43.689611 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:03:43.689617 | orchestrator |
2026-04-09 01:03:43.689623 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-09 01:03:43.689637 | orchestrator | Thursday 09 April 2026 01:01:18 +0000 (0:00:00.479) 0:00:50.067 ********
2026-04-09 01:03:43.689643 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-04-09 01:03:43.689650 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-04-09 01:03:43.689656 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-04-09 01:03:43.689663 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-04-09 01:03:43.689668 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-04-09 01:03:43.689675 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-04-09 01:03:43.689681 | orchestrator |
2026-04-09 01:03:43.689687 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-04-09 01:03:43.689695 | orchestrator | Thursday 09 April 2026 01:01:20 +0000 (0:00:02.088) 0:00:52.155 ********
2026-04-09 01:03:43.689703 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-09 01:03:43.689712 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-09 01:03:43.689748 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-09 01:03:43.689757 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-09 01:03:43.689770 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-09 01:03:43.689777 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-09 01:03:43.689785 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-09 01:03:43.689798 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-09 01:03:43.689838 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-09 01:03:43.689845 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-09 01:03:43.689853 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-09 01:03:43.689867 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-09 01:03:43.689881 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-09 01:03:43.689889 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-09 01:03:43.689896 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-09 01:03:43.689917 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-09 01:03:43.689930 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-09 01:03:43.689947 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-09 01:03:43.689955 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-09 01:03:43.689962 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-09 01:03:43.689989 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-09 01:03:43.690536 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': 
True}]) 2026-04-09 01:03:43.690573 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-09 01:03:43.690578 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-09 01:03:43.690582 | orchestrator | 2026-04-09 01:03:43.690587 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-09 01:03:43.690595 | orchestrator | Thursday 09 April 2026 01:01:26 +0000 (0:00:05.927) 
0:00:58.085 ******** 2026-04-09 01:03:43.690603 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-09 01:03:43.690611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-09 01:03:43.690617 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-09 01:03:43.690623 | orchestrator | 2026-04-09 01:03:43.690630 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-09 01:03:43.690636 | orchestrator | Thursday 09 April 2026 01:01:28 +0000 (0:00:02.320) 0:01:00.405 ******** 2026-04-09 01:03:43.690642 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-09 01:03:43.690649 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-09 01:03:43.690656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-09 01:03:43.690678 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-09 01:03:43.690686 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-09 01:03:43.690691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 
'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-09 01:03:43.690697 | orchestrator | 2026-04-09 01:03:43.690703 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-09 01:03:43.690709 | orchestrator | Thursday 09 April 2026 01:01:31 +0000 (0:00:03.569) 0:01:03.974 ******** 2026-04-09 01:03:43.690740 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-09 01:03:43.690747 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-09 01:03:43.690753 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-09 01:03:43.690759 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-09 01:03:43.690767 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-09 01:03:43.690771 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-09 01:03:43.690775 | orchestrator | 2026-04-09 01:03:43.690779 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-09 01:03:43.690783 | orchestrator | Thursday 09 April 2026 01:01:33 +0000 (0:00:01.257) 0:01:05.231 ******** 2026-04-09 01:03:43.690787 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.690791 | orchestrator | 2026-04-09 01:03:43.690795 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-09 01:03:43.690799 | orchestrator | Thursday 09 April 2026 01:01:33 +0000 (0:00:00.274) 0:01:05.506 ******** 2026-04-09 01:03:43.690803 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.690807 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.690811 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.690815 | orchestrator | 2026-04-09 01:03:43.690819 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 01:03:43.690823 | orchestrator | 
Thursday 09 April 2026 01:01:33 +0000 (0:00:00.317) 0:01:05.824 ******** 2026-04-09 01:03:43.690827 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:03:43.690831 | orchestrator | 2026-04-09 01:03:43.690835 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-09 01:03:43.690839 | orchestrator | Thursday 09 April 2026 01:01:34 +0000 (0:00:00.506) 0:01:06.330 ******** 2026-04-09 01:03:43.690844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.690849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.690874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.690880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690925 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.690941 | orchestrator | 2026-04-09 01:03:43.690945 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-09 01:03:43.690949 | orchestrator | Thursday 09 April 2026 01:01:38 +0000 (0:00:04.214) 0:01:10.545 ******** 2026-04-09 01:03:43.690954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.690958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.690973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.690981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.690987 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.690998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691050 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.691057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691087 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.691094 | orchestrator | 2026-04-09 01:03:43.691099 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-09 01:03:43.691105 | orchestrator | Thursday 09 April 2026 01:01:39 +0000 (0:00:01.075) 0:01:11.621 ******** 2026-04-09 01:03:43.691117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691139 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.691144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691173 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.691178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691201 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.691205 | orchestrator | 2026-04-09 01:03:43.691210 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-09 01:03:43.691214 | orchestrator | Thursday 09 April 2026 01:01:40 +0000 (0:00:01.342) 0:01:12.964 ******** 2026-04-09 01:03:43.691219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691244 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691298 | orchestrator | 2026-04-09 01:03:43.691302 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-09 01:03:43.691307 | orchestrator | Thursday 09 April 2026 01:01:46 +0000 (0:00:05.891) 0:01:18.855 ******** 2026-04-09 01:03:43.691401 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-09 01:03:43.691408 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.691413 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-09 01:03:43.691418 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.691422 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-09 01:03:43.691427 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.691432 | orchestrator | 2026-04-09 01:03:43.691437 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-09 01:03:43.691441 | orchestrator | Thursday 09 April 2026 01:01:47 +0000 (0:00:00.701) 0:01:19.556 ******** 2026-04-09 01:03:43.691447 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:03:43.691451 | orchestrator | 2026-04-09 01:03:43.691456 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-09 01:03:43.691460 | orchestrator | Thursday 09 April 2026 01:01:48 +0000 (0:00:00.915) 0:01:20.472 ******** 2026-04-09 01:03:43.691464 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 01:03:43.691468 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:03:43.691472 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:03:43.691475 | orchestrator | 2026-04-09 01:03:43.691479 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-09 01:03:43.691483 | orchestrator | Thursday 09 April 2026 01:01:50 +0000 (0:00:02.125) 0:01:22.597 ******** 2026-04-09 01:03:43.691491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691559 | orchestrator | 2026-04-09 01:03:43.691566 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-09 01:03:43.691570 | orchestrator | Thursday 09 April 2026 01:02:00 +0000 (0:00:10.017) 0:01:32.615 ******** 2026-04-09 01:03:43.691574 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.691578 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:03:43.691582 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:03:43.691586 | orchestrator | 2026-04-09 01:03:43.691593 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-09 01:03:43.691599 | orchestrator | Thursday 09 April 2026 01:02:01 +0000 (0:00:01.344) 0:01:33.960 ******** 2026-04-09 01:03:43.691605 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.691611 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:03:43.691617 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:03:43.691623 | orchestrator | 2026-04-09 01:03:43.691629 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-09 01:03:43.691635 | orchestrator | Thursday 09 April 2026 01:02:03 +0000 (0:00:01.412) 0:01:35.372 ******** 2026-04-09 01:03:43.691644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691678 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.691691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691718 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.691722 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691750 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.691754 | orchestrator | 2026-04-09 01:03:43.691757 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-09 01:03:43.691761 | orchestrator | Thursday 09 April 2026 01:02:04 +0000 (0:00:01.009) 0:01:36.382 ******** 2026-04-09 01:03:43.691765 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.691769 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.691773 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.691776 | orchestrator | 2026-04-09 01:03:43.691780 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-09 01:03:43.691784 | orchestrator | Thursday 09 April 2026 01:02:04 +0000 
(0:00:00.326) 0:01:36.709 ******** 2026-04-09 01:03:43.691788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:03:43.691809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691837 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:03:43.691861 | orchestrator | 2026-04-09 01:03:43.691865 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-09 01:03:43.691871 | orchestrator | Thursday 09 April 2026 01:02:08 +0000 (0:00:03.547) 0:01:40.256 ******** 2026-04-09 01:03:43.691877 | orchestrator | changed: 
[testbed-node-0] => {
2026-04-09 01:03:43.691883 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:03:43.691889 | orchestrator | }
2026-04-09 01:03:43.691895 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 01:03:43.691902 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:03:43.691909 | orchestrator | }
2026-04-09 01:03:43.691913 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 01:03:43.691916 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:03:43.691920 | orchestrator | }
2026-04-09 01:03:43.691924 | orchestrator |
2026-04-09 01:03:43.691928 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 01:03:43.691931 | orchestrator | Thursday 09 April 2026 01:02:08 +0000 (0:00:00.311) 0:01:40.568 ********
2026-04-09 01:03:43.691941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:03:43.691946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value':
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})  2026-04-09 01:03:43.691963 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.691968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.691975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.691991 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.691995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:03:43.692000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.692007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.692011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:03:43.692015 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.692019 | orchestrator | 2026-04-09 01:03:43.692022 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 01:03:43.692026 | orchestrator | Thursday 09 April 2026 01:02:09 +0000 (0:00:01.166) 0:01:41.734 ******** 2026-04-09 01:03:43.692030 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.692034 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:03:43.692038 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:03:43.692042 | orchestrator | 2026-04-09 01:03:43.692046 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-09 01:03:43.692049 | orchestrator | Thursday 09 April 2026 01:02:09 +0000 (0:00:00.267) 0:01:42.002 ******** 2026-04-09 01:03:43.692057 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.692061 | orchestrator | 2026-04-09 01:03:43.692065 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-09 01:03:43.692068 | orchestrator | Thursday 09 April 2026 01:02:12 +0000 (0:00:02.254) 0:01:44.257 ******** 2026-04-09 01:03:43.692072 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.692076 | orchestrator | 2026-04-09 01:03:43.692080 | orchestrator | TASK [cinder : Running Cinder bootstrap 
container] ***************************** 2026-04-09 01:03:43.692084 | orchestrator | Thursday 09 April 2026 01:02:14 +0000 (0:00:02.460) 0:01:46.717 ******** 2026-04-09 01:03:43.692088 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.692092 | orchestrator | 2026-04-09 01:03:43.692095 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 01:03:43.692099 | orchestrator | Thursday 09 April 2026 01:02:34 +0000 (0:00:20.018) 0:02:06.736 ******** 2026-04-09 01:03:43.692103 | orchestrator | 2026-04-09 01:03:43.692107 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 01:03:43.692111 | orchestrator | Thursday 09 April 2026 01:02:34 +0000 (0:00:00.103) 0:02:06.839 ******** 2026-04-09 01:03:43.692115 | orchestrator | 2026-04-09 01:03:43.692121 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 01:03:43.692127 | orchestrator | Thursday 09 April 2026 01:02:34 +0000 (0:00:00.108) 0:02:06.948 ******** 2026-04-09 01:03:43.692137 | orchestrator | 2026-04-09 01:03:43.692143 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-09 01:03:43.692149 | orchestrator | Thursday 09 April 2026 01:02:35 +0000 (0:00:00.175) 0:02:07.123 ******** 2026-04-09 01:03:43.692156 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.692161 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:03:43.692167 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:03:43.692173 | orchestrator | 2026-04-09 01:03:43.692178 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-09 01:03:43.692184 | orchestrator | Thursday 09 April 2026 01:02:56 +0000 (0:00:21.227) 0:02:28.350 ******** 2026-04-09 01:03:43.692189 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:03:43.692195 | orchestrator | changed: 
[testbed-node-2] 2026-04-09 01:03:43.692200 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.692205 | orchestrator | 2026-04-09 01:03:43.692211 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-09 01:03:43.692217 | orchestrator | Thursday 09 April 2026 01:03:06 +0000 (0:00:09.992) 0:02:38.342 ******** 2026-04-09 01:03:43.692223 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.692229 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:03:43.692234 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:03:43.692240 | orchestrator | 2026-04-09 01:03:43.692245 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-09 01:03:43.692252 | orchestrator | Thursday 09 April 2026 01:03:29 +0000 (0:00:23.590) 0:03:01.933 ******** 2026-04-09 01:03:43.692258 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:03:43.692263 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:03:43.692269 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:03:43.692275 | orchestrator | 2026-04-09 01:03:43.692281 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-09 01:03:43.692287 | orchestrator | Thursday 09 April 2026 01:03:41 +0000 (0:00:12.107) 0:03:14.041 ******** 2026-04-09 01:03:43.692293 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:03:43.692298 | orchestrator | 2026-04-09 01:03:43.692304 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:03:43.692349 | orchestrator | testbed-node-0 : ok=33  changed=24  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-09 01:03:43.692358 | orchestrator | testbed-node-1 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 01:03:43.692376 | orchestrator | testbed-node-2 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-04-09 01:03:43.692383 | orchestrator | 2026-04-09 01:03:43.692389 | orchestrator | 2026-04-09 01:03:43.692395 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:03:43.692401 | orchestrator | Thursday 09 April 2026 01:03:42 +0000 (0:00:00.807) 0:03:14.848 ******** 2026-04-09 01:03:43.692407 | orchestrator | =============================================================================== 2026-04-09 01:03:43.692413 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.59s 2026-04-09 01:03:43.692419 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.23s 2026-04-09 01:03:43.692425 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.02s 2026-04-09 01:03:43.692432 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 15.46s 2026-04-09 01:03:43.692438 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.11s 2026-04-09 01:03:43.692444 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.02s 2026-04-09 01:03:43.692450 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.99s 2026-04-09 01:03:43.692457 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 8.66s 2026-04-09 01:03:43.692464 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 8.19s 2026-04-09 01:03:43.692470 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.93s 2026-04-09 01:03:43.692476 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.89s 2026-04-09 01:03:43.692482 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.37s 2026-04-09 01:03:43.692487 | orchestrator | 
service-cert-copy : cinder | Copying over extra CA certificates --------- 4.21s 2026-04-09 01:03:43.692494 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.89s 2026-04-09 01:03:43.692500 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.79s 2026-04-09 01:03:43.692506 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.57s 2026-04-09 01:03:43.692512 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.55s 2026-04-09 01:03:43.692519 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.24s 2026-04-09 01:03:43.692523 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.46s 2026-04-09 01:03:43.692527 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.32s 2026-04-09 01:03:43.692531 | orchestrator | 2026-04-09 01:03:43 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:43.692536 | orchestrator | 2026-04-09 01:03:43 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:43.692540 | orchestrator | 2026-04-09 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:46.710224 | orchestrator | 2026-04-09 01:03:46 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:03:46.710815 | orchestrator | 2026-04-09 01:03:46 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:03:46.711449 | orchestrator | 2026-04-09 01:03:46 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:03:46.712614 | orchestrator | 2026-04-09 01:03:46 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state STARTED 2026-04-09 01:03:46.712647 | orchestrator | 2026-04-09 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:53.506998 | orchestrator | 2026-04-09 01:04:53 | INFO  | Task
e3f6119c-a0f4-43f8-811e-46c40b3aafab is in state STARTED 2026-04-09 01:04:53.507114 | orchestrator | 2026-04-09 01:04:53 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:04:53.507852 | orchestrator | 2026-04-09 01:04:53 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:04:53.508558 | orchestrator | 2026-04-09 01:04:53 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:04:53.509773 | orchestrator | 2026-04-09 01:04:53 | INFO  | Task 505dd941-3223-4b71-aba9-6d71b1cdfd39 is in state SUCCESS 2026-04-09 01:04:53.510874 | orchestrator | 2026-04-09 01:04:53.510903 | orchestrator | 2026-04-09 01:04:53.510908 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:04:53.510913 | orchestrator | 2026-04-09 01:04:53.510917 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:04:53.510921 | orchestrator | Thursday 09 April 2026 01:02:57 +0000 (0:00:00.534) 0:00:00.534 ******** 2026-04-09 01:04:53.510926 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:04:53.510930 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:04:53.510934 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:04:53.510938 | orchestrator | 2026-04-09 01:04:53.510956 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:04:53.510966 | orchestrator | Thursday 09 April 2026 01:02:58 +0000 (0:00:00.555) 0:00:01.090 ******** 2026-04-09 01:04:53.510974 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-09 01:04:53.510981 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-09 01:04:53.510987 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-09 01:04:53.510994 | orchestrator | 2026-04-09 01:04:53.511000 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-04-09 01:04:53.511006 | orchestrator | 2026-04-09 01:04:53.511012 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 01:04:53.511018 | orchestrator | Thursday 09 April 2026 01:02:58 +0000 (0:00:00.315) 0:00:01.406 ******** 2026-04-09 01:04:53.511025 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:04:53.511032 | orchestrator | 2026-04-09 01:04:53.511038 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-04-09 01:04:53.511045 | orchestrator | Thursday 09 April 2026 01:02:59 +0000 (0:00:01.048) 0:00:02.454 ******** 2026-04-09 01:04:53.511051 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-09 01:04:53.511057 | orchestrator | 2026-04-09 01:04:53.511064 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************ 2026-04-09 01:04:53.511070 | orchestrator | Thursday 09 April 2026 01:03:04 +0000 (0:00:04.388) 0:00:06.843 ******** 2026-04-09 01:04:53.511077 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-09 01:04:53.511084 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-09 01:04:53.511092 | orchestrator | 2026-04-09 01:04:53.511098 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-09 01:04:53.511106 | orchestrator | Thursday 09 April 2026 01:03:11 +0000 (0:00:07.721) 0:00:14.564 ******** 2026-04-09 01:04:53.511110 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:04:53.511114 | orchestrator | 2026-04-09 01:04:53.511118 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-09 
01:04:53.511122 | orchestrator | Thursday 09 April 2026 01:03:15 +0000 (0:00:03.474) 0:00:18.039 ******** 2026-04-09 01:04:53.511126 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-09 01:04:53.511130 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:04:53.511133 | orchestrator | 2026-04-09 01:04:53.511139 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-09 01:04:53.511146 | orchestrator | Thursday 09 April 2026 01:03:20 +0000 (0:00:04.859) 0:00:22.898 ******** 2026-04-09 01:04:53.511152 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:04:53.511159 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-09 01:04:53.511165 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-09 01:04:53.511172 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-09 01:04:53.511179 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-09 01:04:53.511185 | orchestrator | 2026-04-09 01:04:53.511191 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] *********** 2026-04-09 01:04:53.511198 | orchestrator | Thursday 09 April 2026 01:03:37 +0000 (0:00:17.246) 0:00:40.145 ******** 2026-04-09 01:04:53.511205 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-09 01:04:53.511211 | orchestrator | 2026-04-09 01:04:53.511218 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-09 01:04:53.511224 | orchestrator | Thursday 09 April 2026 01:03:42 +0000 (0:00:04.477) 0:00:44.622 ******** 2026-04-09 01:04:53.511249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.511274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.511287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.511292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511319 | orchestrator | 2026-04-09 01:04:53.511323 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-09 01:04:53.511352 | orchestrator | Thursday 09 April 2026 01:03:44 +0000 (0:00:02.834) 0:00:47.457 ******** 2026-04-09 01:04:53.511357 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-09 01:04:53.511360 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-09 01:04:53.511395 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-09 01:04:53.511402 | orchestrator | 2026-04-09 01:04:53.511410 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-09 01:04:53.511418 | orchestrator | Thursday 09 April 2026 01:03:45 +0000 (0:00:01.129) 0:00:48.586 ******** 2026-04-09 01:04:53.511425 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:53.511432 | orchestrator | 2026-04-09 
01:04:53.511438 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-09 01:04:53.511444 | orchestrator | Thursday 09 April 2026 01:03:46 +0000 (0:00:00.169) 0:00:48.756 ******** 2026-04-09 01:04:53.511450 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:53.511462 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:04:53.511466 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:53.511470 | orchestrator | 2026-04-09 01:04:53.511473 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 01:04:53.511477 | orchestrator | Thursday 09 April 2026 01:03:46 +0000 (0:00:00.420) 0:00:49.176 ******** 2026-04-09 01:04:53.511481 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:04:53.511485 | orchestrator | 2026-04-09 01:04:53.511489 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-09 01:04:53.511494 | orchestrator | Thursday 09 April 2026 01:03:47 +0000 (0:00:00.602) 0:00:49.779 ******** 2026-04-09 01:04:53.511500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.511510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.511515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.511520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.511560 | orchestrator | 2026-04-09 01:04:53.511570 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-09 01:04:53.511576 | orchestrator | Thursday 09 April 2026 01:03:50 +0000 (0:00:03.379) 0:00:53.160 ******** 2026-04-09 01:04:53.511588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.511595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511612 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:53.511619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.511626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511645 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:04:53.511653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.511664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511675 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:53.511680 | orchestrator | 2026-04-09 01:04:53.511684 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-09 01:04:53.511689 | orchestrator | Thursday 09 April 2026 01:03:51 +0000 (0:00:00.855) 0:00:54.015 ******** 2026-04-09 01:04:53.511694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.511703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511712 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:53.511723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 
01:04:53.511730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511748 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:04:53.511756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.511763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.511774 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:53.511779 | orchestrator | 2026-04-09 01:04:53.511784 | orchestrator | TASK [barbican : Copying 
over config.json files for services] ****************** 2026-04-09 01:04:53.511789 | orchestrator | Thursday 09 April 2026 01:03:52 +0000 (0:00:01.400) 0:00:55.416 ******** 2026-04-09 01:04:53.512002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512082 | orchestrator | 2026-04-09 01:04:53.512086 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-09 01:04:53.512090 | orchestrator | Thursday 09 April 2026 01:03:56 +0000 (0:00:03.690) 0:00:59.107 ******** 2026-04-09 01:04:53.512094 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:04:53.512098 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:04:53.512102 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:04:53.512106 | orchestrator | 2026-04-09 01:04:53.512109 | orchestrator | TASK [barbican : Checking 
whether barbican-api-paste.ini file exists] ********** 2026-04-09 01:04:53.512113 | orchestrator | Thursday 09 April 2026 01:03:57 +0000 (0:00:01.446) 0:01:00.553 ******** 2026-04-09 01:04:53.512117 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:04:53.512121 | orchestrator | 2026-04-09 01:04:53.512125 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-09 01:04:53.512128 | orchestrator | Thursday 09 April 2026 01:03:58 +0000 (0:00:00.968) 0:01:01.522 ******** 2026-04-09 01:04:53.512132 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:53.512136 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:04:53.512140 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:53.512144 | orchestrator | 2026-04-09 01:04:53.512147 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-09 01:04:53.512151 | orchestrator | Thursday 09 April 2026 01:03:59 +0000 (0:00:00.508) 0:01:02.031 ******** 2026-04-09 01:04:53.512158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512222 | orchestrator | 2026-04-09 01:04:53.512225 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-09 01:04:53.512249 | orchestrator | Thursday 09 April 2026 01:04:05 +0000 (0:00:06.190) 0:01:08.221 ******** 2026-04-09 01:04:53.512262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.512276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512294 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:53.512306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.512313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512326 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:04:53.512358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.512397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 
01:04:53.512413 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:53.512417 | orchestrator | 2026-04-09 01:04:53.512422 | orchestrator | TASK [service-check-containers : barbican | Check containers] ****************** 2026-04-09 01:04:53.512429 | orchestrator | Thursday 09 April 2026 01:04:06 +0000 (0:00:00.501) 0:01:08.723 ******** 2026-04-09 01:04:53.512440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:04:53.512475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:53.512525 | orchestrator | 2026-04-09 01:04:53.512532 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-04-09 01:04:53.512538 | orchestrator | Thursday 09 April 2026 01:04:08 +0000 (0:00:02.478) 0:01:11.201 ******** 2026-04-09 01:04:53.512545 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 01:04:53.512550 | orchestrator |  "msg": "Notifying 
handlers" 2026-04-09 01:04:53.512554 | orchestrator | } 2026-04-09 01:04:53.512558 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 01:04:53.512562 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:04:53.512566 | orchestrator | } 2026-04-09 01:04:53.512569 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 01:04:53.512573 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:04:53.512577 | orchestrator | } 2026-04-09 01:04:53.512581 | orchestrator | 2026-04-09 01:04:53.512585 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 01:04:53.512589 | orchestrator | Thursday 09 April 2026 01:04:08 +0000 (0:00:00.284) 0:01:11.486 ******** 2026-04-09 01:04:53.512593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.512598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512609 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:53.512618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.512623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512633 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:04:53.512638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:04:53.512643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:53.512658 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:53.512662 | orchestrator | 2026-04-09 01:04:53.512667 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 01:04:53.512672 | orchestrator | Thursday 09 April 2026 01:04:09 +0000 (0:00:00.914) 0:01:12.400 ******** 2026-04-09 01:04:53.512676 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:53.512681 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:04:53.512685 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:53.512690 | orchestrator | 2026-04-09 01:04:53.512694 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-09 01:04:53.512701 | orchestrator | Thursday 09 April 2026 01:04:10 +0000 (0:00:00.260) 0:01:12.660 ******** 2026-04-09 01:04:53.512706 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:04:53.512710 | orchestrator | 2026-04-09 01:04:53.512715 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-09 01:04:53.512720 | orchestrator | Thursday 09 April 2026 01:04:12 +0000 (0:00:02.417) 0:01:15.077 ******** 2026-04-09 01:04:53.512724 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:04:53.512729 | orchestrator | 2026-04-09 01:04:53.512733 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-09 01:04:53.512738 | orchestrator | Thursday 09 April 2026 01:04:15 +0000 (0:00:02.684) 0:01:17.761 ******** 2026-04-09 01:04:53.512742 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:04:53.512747 | orchestrator | 2026-04-09 01:04:53.512752 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 01:04:53.512756 | 
orchestrator | Thursday 09 April 2026 01:04:29 +0000 (0:00:14.075) 0:01:31.837 ******** 2026-04-09 01:04:53.512761 | orchestrator | 2026-04-09 01:04:53.512766 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 01:04:53.512770 | orchestrator | Thursday 09 April 2026 01:04:29 +0000 (0:00:00.063) 0:01:31.900 ******** 2026-04-09 01:04:53.512775 | orchestrator | 2026-04-09 01:04:53.512779 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-09 01:04:53.512784 | orchestrator | Thursday 09 April 2026 01:04:29 +0000 (0:00:00.061) 0:01:31.961 ******** 2026-04-09 01:04:53.512788 | orchestrator | 2026-04-09 01:04:53.512793 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-09 01:04:53.512797 | orchestrator | Thursday 09 April 2026 01:04:29 +0000 (0:00:00.061) 0:01:32.023 ******** 2026-04-09 01:04:53.512802 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:04:53.512807 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:04:53.512811 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:04:53.512816 | orchestrator | 2026-04-09 01:04:53.512821 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-09 01:04:53.512825 | orchestrator | Thursday 09 April 2026 01:04:35 +0000 (0:00:06.337) 0:01:38.360 ******** 2026-04-09 01:04:53.512830 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:04:53.512837 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:04:53.512842 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:04:53.512847 | orchestrator | 2026-04-09 01:04:53.512851 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-09 01:04:53.512856 | orchestrator | Thursday 09 April 2026 01:04:40 +0000 (0:00:05.228) 0:01:43.589 ******** 2026-04-09 01:04:53.512861 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 01:04:53.512865 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:04:53.512870 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:04:53.512874 | orchestrator | 2026-04-09 01:04:53.512879 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:04:53.512884 | orchestrator | testbed-node-0 : ok=25  changed=19  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-09 01:04:53.512889 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:04:53.512893 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:04:53.512898 | orchestrator | 2026-04-09 01:04:53.512902 | orchestrator | 2026-04-09 01:04:53.512907 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:04:53.512912 | orchestrator | Thursday 09 April 2026 01:04:51 +0000 (0:00:10.458) 0:01:54.048 ******** 2026-04-09 01:04:53.512916 | orchestrator | =============================================================================== 2026-04-09 01:04:53.512921 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.25s 2026-04-09 01:04:53.512925 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 14.08s 2026-04-09 01:04:53.512929 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.46s 2026-04-09 01:04:53.512934 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 7.72s 2026-04-09 01:04:53.512938 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.34s 2026-04-09 01:04:53.512942 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.19s 2026-04-09 01:04:53.512947 | orchestrator | barbican : Restart 
barbican-keystone-listener container ----------------- 5.23s 2026-04-09 01:04:53.512951 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.86s 2026-04-09 01:04:53.512956 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 4.48s 2026-04-09 01:04:53.512962 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 4.39s 2026-04-09 01:04:53.512967 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.69s 2026-04-09 01:04:53.512971 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.47s 2026-04-09 01:04:53.512976 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.38s 2026-04-09 01:04:53.512982 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.83s 2026-04-09 01:04:53.512988 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.68s 2026-04-09 01:04:53.512994 | orchestrator | service-check-containers : barbican | Check containers ------------------ 2.48s 2026-04-09 01:04:53.513000 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.42s 2026-04-09 01:04:53.513007 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.45s 2026-04-09 01:04:53.513013 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.40s 2026-04-09 01:04:53.513024 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.13s 2026-04-09 01:04:53.513032 | orchestrator | 2026-04-09 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:56.545827 | orchestrator | 2026-04-09 01:04:56 | INFO  | Task e3f6119c-a0f4-43f8-811e-46c40b3aafab is in state STARTED 2026-04-09 01:04:56.546379 | orchestrator | 2026-04-09 01:04:56 | INFO  | Task 
b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:04:56.550079 | orchestrator | 2026-04-09 01:04:56 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:04:56.555632 | orchestrator | 2026-04-09 01:04:56 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:04:56.555690 | orchestrator | 2026-04-09 01:04:56 | INFO  | Wait 1 second(s) until the next check [identical STARTED polling output repeated every ~3 s from 01:04:59 through 01:05:48 elided] 2026-04-09 01:05:51.435390 | orchestrator | 2026-04-09 01:05:51 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:05:51.435438 | orchestrator | 2026-04-09 01:05:51 | INFO  | Task
e3f6119c-a0f4-43f8-811e-46c40b3aafab is in state SUCCESS 2026-04-09 01:05:51.436659 | orchestrator | 2026-04-09 01:05:51 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state STARTED 2026-04-09 01:05:51.438779 | orchestrator | 2026-04-09 01:05:51 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:05:51.439947 | orchestrator | 2026-04-09 01:05:51 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:05:51.439983 | orchestrator | 2026-04-09 01:05:51 | INFO  | Wait 1 second(s) until the next check [identical STARTED polling output at 01:05:54 and 01:05:57 elided] 2026-04-09 01:06:00.580361 | orchestrator | 2026-04-09 01:06:00 | INFO  | Task
e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:00.581487 | orchestrator | 2026-04-09 01:06:00 | INFO  | Task b53a516e-dc2f-4bb7-b26b-7af535c591da is in state SUCCESS 2026-04-09 01:06:00.582867 | orchestrator | 2026-04-09 01:06:00.582925 | orchestrator | 2026-04-09 01:06:00.582934 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-09 01:06:00.582941 | orchestrator | 2026-04-09 01:06:00.582949 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-09 01:06:00.582956 | orchestrator | Thursday 09 April 2026 01:04:56 +0000 (0:00:00.096) 0:00:00.096 ******** 2026-04-09 01:06:00.582963 | orchestrator | changed: [localhost] 2026-04-09 01:06:00.582972 | orchestrator | 2026-04-09 01:06:00.582979 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-09 01:06:00.583002 | orchestrator | Thursday 09 April 2026 01:04:57 +0000 (0:00:00.887) 0:00:00.983 ******** 2026-04-09 01:06:00.583009 | orchestrator | changed: [localhost] 2026-04-09 01:06:00.583016 | orchestrator | 2026-04-09 01:06:00.583030 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-04-09 01:06:00.583056 | orchestrator | Thursday 09 April 2026 01:05:44 +0000 (0:00:46.841) 0:00:47.825 ******** 2026-04-09 01:06:00.583067 | orchestrator | changed: [localhost] 2026-04-09 01:06:00.583078 | orchestrator | 2026-04-09 01:06:00.583084 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:06:00.583090 | orchestrator | 2026-04-09 01:06:00.583096 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:06:00.583101 | orchestrator | Thursday 09 April 2026 01:05:48 +0000 (0:00:04.923) 0:00:52.748 ******** 2026-04-09 01:06:00.583106 | orchestrator | ok: [testbed-node-0] 2026-04-09 
01:06:00.583111 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:06:00.583115 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:06:00.583121 | orchestrator | 2026-04-09 01:06:00.583126 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:06:00.583132 | orchestrator | Thursday 09 April 2026 01:05:49 +0000 (0:00:00.265) 0:00:53.014 ******** 2026-04-09 01:06:00.583156 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-04-09 01:06:00.583163 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-04-09 01:06:00.583168 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-04-09 01:06:00.583174 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-04-09 01:06:00.583180 | orchestrator | 2026-04-09 01:06:00.583186 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-04-09 01:06:00.583192 | orchestrator | skipping: no hosts matched 2026-04-09 01:06:00.583220 | orchestrator | 2026-04-09 01:06:00.583226 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:06:00.583252 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:06:00.583259 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:06:00.583266 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:06:00.583271 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:06:00.583277 | orchestrator | 2026-04-09 01:06:00.583283 | orchestrator | 2026-04-09 01:06:00.583289 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:06:00.583294 | orchestrator | 
Thursday 09 April 2026 01:05:49 +0000 (0:00:00.365) 0:00:53.379 ******** 2026-04-09 01:06:00.583300 | orchestrator | =============================================================================== 2026-04-09 01:06:00.583310 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 46.84s 2026-04-09 01:06:00.583316 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.92s 2026-04-09 01:06:00.583322 | orchestrator | Ensure the destination directory exists --------------------------------- 0.89s 2026-04-09 01:06:00.583327 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s 2026-04-09 01:06:00.583333 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-04-09 01:06:00.583339 | orchestrator | 2026-04-09 01:06:00.583344 | orchestrator | 2026-04-09 01:06:00.583350 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:06:00.583355 | orchestrator | 2026-04-09 01:06:00.583377 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:06:00.583394 | orchestrator | Thursday 09 April 2026 01:01:50 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-04-09 01:06:00.583400 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:06:00.583414 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:06:00.583420 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:06:00.583426 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:06:00.583432 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:06:00.583443 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:06:00.583449 | orchestrator | 2026-04-09 01:06:00.583455 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:06:00.583461 | orchestrator | Thursday 09 April 2026 01:01:51 +0000 (0:00:00.563) 0:00:00.840 ******** 
2026-04-09 01:06:00.583467 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-09 01:06:00.583472 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-09 01:06:00.583478 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-09 01:06:00.583484 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-09 01:06:00.583490 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-09 01:06:00.583496 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-09 01:06:00.583501 | orchestrator | 2026-04-09 01:06:00.583507 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-09 01:06:00.583513 | orchestrator | 2026-04-09 01:06:00.583519 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 01:06:00.583525 | orchestrator | Thursday 09 April 2026 01:01:52 +0000 (0:00:00.827) 0:00:01.667 ******** 2026-04-09 01:06:00.583541 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:06:00.583548 | orchestrator | 2026-04-09 01:06:00.583554 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-09 01:06:00.583559 | orchestrator | Thursday 09 April 2026 01:01:53 +0000 (0:00:01.109) 0:00:02.776 ******** 2026-04-09 01:06:00.583565 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:06:00.583571 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:06:00.583577 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:06:00.583583 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:06:00.583588 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:06:00.583594 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:06:00.583600 | orchestrator | 2026-04-09 01:06:00.583606 | orchestrator | TASK [neutron : 
Get container volume facts] ************************************ 2026-04-09 01:06:00.583612 | orchestrator | Thursday 09 April 2026 01:01:54 +0000 (0:00:01.625) 0:00:04.402 ******** 2026-04-09 01:06:00.583618 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:06:00.583624 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:06:00.583630 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:06:00.583636 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:06:00.583641 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:06:00.583647 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:06:00.583653 | orchestrator | 2026-04-09 01:06:00.583659 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-09 01:06:00.583665 | orchestrator | Thursday 09 April 2026 01:01:56 +0000 (0:00:01.313) 0:00:05.715 ******** 2026-04-09 01:06:00.583670 | orchestrator | ok: [testbed-node-0] => { 2026-04-09 01:06:00.583676 | orchestrator |  "changed": false, 2026-04-09 01:06:00.583682 | orchestrator |  "msg": "All assertions passed" 2026-04-09 01:06:00.583689 | orchestrator | } 2026-04-09 01:06:00.583694 | orchestrator | ok: [testbed-node-1] => { 2026-04-09 01:06:00.583700 | orchestrator |  "changed": false, 2026-04-09 01:06:00.583706 | orchestrator |  "msg": "All assertions passed" 2026-04-09 01:06:00.583712 | orchestrator | } 2026-04-09 01:06:00.583718 | orchestrator | ok: [testbed-node-2] => { 2026-04-09 01:06:00.583724 | orchestrator |  "changed": false, 2026-04-09 01:06:00.583730 | orchestrator |  "msg": "All assertions passed" 2026-04-09 01:06:00.583735 | orchestrator | } 2026-04-09 01:06:00.583741 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 01:06:00.583747 | orchestrator |  "changed": false, 2026-04-09 01:06:00.583753 | orchestrator |  "msg": "All assertions passed" 2026-04-09 01:06:00.583759 | orchestrator | } 2026-04-09 01:06:00.583765 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 01:06:00.583770 | orchestrator |  
"changed": false, 2026-04-09 01:06:00.583776 | orchestrator |  "msg": "All assertions passed" 2026-04-09 01:06:00.583786 | orchestrator | } 2026-04-09 01:06:00.583792 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 01:06:00.583798 | orchestrator |  "changed": false, 2026-04-09 01:06:00.583804 | orchestrator |  "msg": "All assertions passed" 2026-04-09 01:06:00.583810 | orchestrator | } 2026-04-09 01:06:00.583815 | orchestrator | 2026-04-09 01:06:00.583821 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-09 01:06:00.583827 | orchestrator | Thursday 09 April 2026 01:01:56 +0000 (0:00:00.522) 0:00:06.238 ******** 2026-04-09 01:06:00.583833 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.583839 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.583845 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.583850 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.583856 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.583862 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.583868 | orchestrator | 2026-04-09 01:06:00.583874 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-04-09 01:06:00.583880 | orchestrator | Thursday 09 April 2026 01:01:57 +0000 (0:00:00.709) 0:00:06.948 ******** 2026-04-09 01:06:00.583885 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-09 01:06:00.583891 | orchestrator | 2026-04-09 01:06:00.583897 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting endpoints] ************* 2026-04-09 01:06:00.583903 | orchestrator | Thursday 09 April 2026 01:02:00 +0000 (0:00:03.374) 0:00:10.322 ******** 2026-04-09 01:06:00.583909 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-09 01:06:00.583915 | orchestrator | changed: [testbed-node-0] => 
(item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-09 01:06:00.583921 | orchestrator | 2026-04-09 01:06:00.583926 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-09 01:06:00.583932 | orchestrator | Thursday 09 April 2026 01:02:07 +0000 (0:00:07.163) 0:00:17.486 ******** 2026-04-09 01:06:00.583938 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:06:00.583947 | orchestrator | 2026-04-09 01:06:00.583953 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-09 01:06:00.583959 | orchestrator | Thursday 09 April 2026 01:02:11 +0000 (0:00:03.760) 0:00:21.246 ******** 2026-04-09 01:06:00.583965 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-09 01:06:00.583971 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:06:00.583977 | orchestrator | 2026-04-09 01:06:00.583983 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-09 01:06:00.583988 | orchestrator | Thursday 09 April 2026 01:02:15 +0000 (0:00:04.143) 0:00:25.389 ******** 2026-04-09 01:06:00.583994 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:06:00.584000 | orchestrator | 2026-04-09 01:06:00.584006 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 2026-04-09 01:06:00.584012 | orchestrator | Thursday 09 April 2026 01:02:19 +0000 (0:00:03.712) 0:00:29.102 ******** 2026-04-09 01:06:00.584018 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-09 01:06:00.584024 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-09 01:06:00.584030 | orchestrator | 2026-04-09 01:06:00.584036 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 01:06:00.584042 | orchestrator 
| Thursday 09 April 2026 01:02:27 +0000 (0:00:08.273) 0:00:37.376 ******** 2026-04-09 01:06:00.584048 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.584054 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.584063 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.584069 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.584074 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.584080 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.584086 | orchestrator | 2026-04-09 01:06:00.584092 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-09 01:06:00.584101 | orchestrator | Thursday 09 April 2026 01:02:28 +0000 (0:00:00.680) 0:00:38.056 ******** 2026-04-09 01:06:00.584106 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.584111 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.584116 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.584121 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.584126 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.584131 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.584153 | orchestrator | 2026-04-09 01:06:00.584158 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-09 01:06:00.584162 | orchestrator | Thursday 09 April 2026 01:02:31 +0000 (0:00:02.799) 0:00:40.856 ******** 2026-04-09 01:06:00.584167 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:06:00.584172 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:06:00.584177 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:06:00.584182 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:06:00.584186 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:06:00.584191 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:06:00.584195 | orchestrator | 2026-04-09 01:06:00.584200 | orchestrator | TASK [Setting sysctl 
values] *************************************************** 2026-04-09 01:06:00.584205 | orchestrator | Thursday 09 April 2026 01:02:32 +0000 (0:00:00.891) 0:00:41.747 ******** 2026-04-09 01:06:00.584209 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.584214 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.584220 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.584225 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.584230 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.584235 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.584239 | orchestrator | 2026-04-09 01:06:00.584244 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-09 01:06:00.584249 | orchestrator | Thursday 09 April 2026 01:02:34 +0000 (0:00:02.536) 0:00:44.283 ******** 2026-04-09 01:06:00.584256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.584270 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.584285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 
01:06:00.584292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.584297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.584302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.584308 | orchestrator | 2026-04-09 01:06:00.584312 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-09 01:06:00.584318 | orchestrator | Thursday 09 April 2026 01:02:38 +0000 (0:00:03.507) 0:00:47.791 ******** 2026-04-09 01:06:00.584323 | orchestrator | [WARNING]: Skipped 2026-04-09 01:06:00.584328 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-09 01:06:00.584334 | orchestrator | due to this access issue: 2026-04-09 01:06:00.584338 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-09 01:06:00.584345 | orchestrator | a directory 2026-04-09 01:06:00.584353 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:06:00.584358 | orchestrator | 2026-04-09 01:06:00.584363 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 01:06:00.584368 | orchestrator | Thursday 09 April 2026 01:02:38 +0000 (0:00:00.619) 0:00:48.410 ******** 2026-04-09 01:06:00.584373 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:06:00.584378 | orchestrator | 2026-04-09 01:06:00.584383 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-09 01:06:00.584388 | orchestrator | Thursday 09 April 2026 01:02:39 +0000 (0:00:00.900) 0:00:49.310 ******** 2026-04-09 01:06:00.584398 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.584405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-09 01:06:00.584411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.584420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.584429 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.584439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.584445 | orchestrator | 2026-04-09 01:06:00.584450 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-09 01:06:00.584455 | orchestrator | Thursday 09 April 2026 01:02:42 +0000 (0:00:03.167) 0:00:52.478 ******** 2026-04-09 01:06:00.584459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.584464 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.584469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.584477 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.584484 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.584489 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.584498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.584503 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.584526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.584531 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.584537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.584541 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.584554 | orchestrator | 2026-04-09 01:06:00.584558 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-09 01:06:00.584577 | orchestrator | Thursday 09 April 2026 01:02:44 +0000 (0:00:01.660) 0:00:54.139 ******** 2026-04-09 01:06:00.584591 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.584602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-09 01:06:00.584608 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.584613 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.584619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.584624 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.584630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.584641 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.584646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.584652 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.584661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.584666 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.584672 | orchestrator | 2026-04-09 01:06:00.584678 | orchestrator | TASK [neutron : Creating TLS backend PEM File] 
********************************* 2026-04-09 01:06:00.584684 | orchestrator | Thursday 09 April 2026 01:02:47 +0000 (0:00:02.544) 0:00:56.684 ******** 2026-04-09 01:06:00.584689 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.584695 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.584700 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.584706 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585060 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585078 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585084 | orchestrator | 2026-04-09 01:06:00.585090 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-09 01:06:00.585096 | orchestrator | Thursday 09 April 2026 01:02:48 +0000 (0:00:01.865) 0:00:58.549 ******** 2026-04-09 01:06:00.585101 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.585106 | orchestrator | 2026-04-09 01:06:00.585111 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-09 01:06:00.585116 | orchestrator | Thursday 09 April 2026 01:02:49 +0000 (0:00:00.237) 0:00:58.787 ******** 2026-04-09 01:06:00.585121 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.585126 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.585132 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.585198 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585204 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585210 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585215 | orchestrator | 2026-04-09 01:06:00.585221 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-09 01:06:00.585227 | orchestrator | Thursday 09 April 2026 01:02:49 +0000 (0:00:00.499) 0:00:59.287 ******** 2026-04-09 01:06:00.585233 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.585249 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.585256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.585262 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.585272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.585279 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.585291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585298 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585314 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585341 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585347 | orchestrator | 2026-04-09 01:06:00.585352 | orchestrator | TASK [neutron : Copying over config.json files for 
services] ******************* 2026-04-09 01:06:00.585358 | orchestrator | Thursday 09 April 2026 01:02:51 +0000 (0:00:02.297) 0:01:01.584 ******** 2026-04-09 01:06:00.585366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.585400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.585409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.585415 | orchestrator | 2026-04-09 01:06:00.585421 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-09 01:06:00.585427 | orchestrator | Thursday 09 April 2026 01:02:54 +0000 (0:00:02.916) 0:01:04.500 ******** 2026-04-09 01:06:00.585437 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585454 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.585462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.585471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585478 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.585490 | orchestrator | 2026-04-09 01:06:00.585496 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-09 01:06:00.585502 | orchestrator | Thursday 09 April 2026 01:03:01 +0000 (0:00:06.111) 0:01:10.612 ******** 2026-04-09 01:06:00.585508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.585514 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.585520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.585526 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.585534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.585544 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.585554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585559 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585571 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585583 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585589 | orchestrator | 2026-04-09 01:06:00.585594 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-09 01:06:00.585600 | orchestrator | Thursday 09 April 2026 01:03:02 +0000 (0:00:01.554) 0:01:12.166 ******** 2026-04-09 01:06:00.585606 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585612 | 
orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585618 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:00.585623 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585629 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:00.585635 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:00.585641 | orchestrator | 2026-04-09 01:06:00.585646 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-09 01:06:00.585652 | orchestrator | Thursday 09 April 2026 01:03:04 +0000 (0:00:02.114) 0:01:14.281 ******** 2026-04-09 01:06:00.585663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585673 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585691 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.585705 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.585744 | orchestrator | 2026-04-09 01:06:00.585751 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-09 01:06:00.585758 | orchestrator | Thursday 09 April 2026 01:03:07 +0000 (0:00:02.945) 0:01:17.226 ******** 2026-04-09 01:06:00.585765 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.585772 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585778 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.585786 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.585793 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585799 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585806 | orchestrator | 2026-04-09 01:06:00.585813 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-09 01:06:00.585820 | orchestrator | Thursday 09 April 2026 01:03:09 +0000 (0:00:02.194) 0:01:19.421 ******** 2026-04-09 01:06:00.585826 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.585833 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.585841 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.585847 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585853 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585860 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585867 | orchestrator | 2026-04-09 01:06:00.585873 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-09 01:06:00.585880 | orchestrator | Thursday 09 April 2026 01:03:11 +0000 (0:00:01.799) 
0:01:21.220 ******** 2026-04-09 01:06:00.585887 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.585893 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.585900 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585906 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.585913 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585920 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585926 | orchestrator | 2026-04-09 01:06:00.585933 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-09 01:06:00.585940 | orchestrator | Thursday 09 April 2026 01:03:13 +0000 (0:00:01.678) 0:01:22.899 ******** 2026-04-09 01:06:00.585947 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.585953 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.585960 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.585966 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.585973 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.585979 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.585986 | orchestrator | 2026-04-09 01:06:00.585992 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-09 01:06:00.585999 | orchestrator | Thursday 09 April 2026 01:03:14 +0000 (0:00:01.666) 0:01:24.566 ******** 2026-04-09 01:06:00.586006 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.586041 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.586051 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.586058 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.586070 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.586077 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.586085 | orchestrator | 2026-04-09 01:06:00.586091 | orchestrator | TASK [neutron : Copying over 
dnsmasq.conf] ************************************* 2026-04-09 01:06:00.586098 | orchestrator | Thursday 09 April 2026 01:03:17 +0000 (0:00:02.239) 0:01:26.806 ******** 2026-04-09 01:06:00.586104 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:00.586110 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.586116 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:00.586121 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.586127 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:00.586132 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.586165 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:00.586171 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.586177 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:00.586182 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.586188 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:00.586193 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.586199 | orchestrator | 2026-04-09 01:06:00.586209 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-09 01:06:00.586215 | orchestrator | Thursday 09 April 2026 01:03:19 +0000 (0:00:02.166) 0:01:28.972 ******** 2026-04-09 01:06:00.586226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.586234 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.586240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.586246 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.586252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.586263 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.586268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.586274 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.586282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.586288 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.586298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.586304 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.586310 | orchestrator | 2026-04-09 01:06:00.586316 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-09 01:06:00.586322 | orchestrator | Thursday 09 April 2026 01:03:21 +0000 (0:00:02.173) 0:01:31.145 ******** 2026-04-09 01:06:00.586328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.586337 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.586344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.586350 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.586359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.586365 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.586374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-04-09 01:06:00.586380 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.586387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.586396 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.586402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.586408 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.586414 | orchestrator | 2026-04-09 01:06:00.586420 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 
2026-04-09 01:06:00.586426 | orchestrator | Thursday 09 April 2026 01:03:23 +0000 (0:00:01.621) 0:01:32.767 ********
2026-04-09 01:06:00.586431 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586437 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586443 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586449 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586454 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586460 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586466 | orchestrator |
2026-04-09 01:06:00.586472 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-04-09 01:06:00.586478 | orchestrator | Thursday 09 April 2026 01:03:24 +0000 (0:00:01.624) 0:01:34.392 ********
2026-04-09 01:06:00.586484 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586490 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586495 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586501 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:06:00.586507 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:06:00.586512 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:06:00.586518 | orchestrator |
2026-04-09 01:06:00.586524 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-04-09 01:06:00.586530 | orchestrator | Thursday 09 April 2026 01:03:28 +0000 (0:00:03.375) 0:01:37.768 ********
2026-04-09 01:06:00.586536 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586541 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586546 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586550 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586555 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586559 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586563 | orchestrator |
2026-04-09 01:06:00.586568 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-04-09 01:06:00.586576 | orchestrator | Thursday 09 April 2026 01:03:30 +0000 (0:00:01.917) 0:01:39.685 ********
2026-04-09 01:06:00.586585 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586595 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586602 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586608 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586614 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586621 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586627 | orchestrator |
2026-04-09 01:06:00.586637 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-04-09 01:06:00.586642 | orchestrator | Thursday 09 April 2026 01:03:33 +0000 (0:00:03.326) 0:01:43.012 ********
2026-04-09 01:06:00.586647 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586654 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586660 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586666 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586672 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586677 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586683 | orchestrator |
2026-04-09 01:06:00.586690 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-09 01:06:00.586699 | orchestrator | Thursday 09 April 2026 01:03:35 +0000 (0:00:02.039) 0:01:45.051 ********
2026-04-09 01:06:00.586705 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586711 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586716 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586722 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586728 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586734 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586739 | orchestrator |
2026-04-09 01:06:00.586745 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-09 01:06:00.586751 | orchestrator | Thursday 09 April 2026 01:03:37 +0000 (0:00:02.204) 0:01:47.256 ********
2026-04-09 01:06:00.586757 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586763 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586769 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586775 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586780 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586786 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586792 | orchestrator |
2026-04-09 01:06:00.586798 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-09 01:06:00.586804 | orchestrator | Thursday 09 April 2026 01:03:39 +0000 (0:00:01.634) 0:01:48.891 ********
2026-04-09 01:06:00.586810 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586816 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586822 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586827 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586833 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586839 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586845 | orchestrator |
2026-04-09 01:06:00.586851 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-09 01:06:00.586857 | orchestrator | Thursday 09 April 2026 01:03:41 +0000 (0:00:01.827) 0:01:50.718 ********
2026-04-09 01:06:00.586862 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586868 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586873 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586879 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586885 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586891 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586897 | orchestrator |
2026-04-09 01:06:00.586903 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-09 01:06:00.586909 | orchestrator | Thursday 09 April 2026 01:03:43 +0000 (0:00:02.081) 0:01:52.800 ********
2026-04-09 01:06:00.586915 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:00.586922 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:00.586929 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:00.586935 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:00.586940 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:00.586946 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.586957 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:00.586963 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.586968 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:00.586974 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:00.586979 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:00.586985 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.586991 | orchestrator |
2026-04-09 01:06:00.586997 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-09
01:06:00.587003 | orchestrator | Thursday 09 April 2026 01:03:45 +0000 (0:00:02.149) 0:01:54.950 ******** 2026-04-09 01:06:00.587012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.587019 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.587030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.587037 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.587043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.587049 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.587055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.587067 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.587076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.587082 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.587087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.587093 | orchestrator | 
skipping: [testbed-node-5] 2026-04-09 01:06:00.587099 | orchestrator | 2026-04-09 01:06:00.587106 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-04-09 01:06:00.587111 | orchestrator | Thursday 09 April 2026 01:03:47 +0000 (0:00:01.872) 0:01:56.822 ******** 2026-04-09 01:06:00.587116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.587123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.587163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:00.587174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.587184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.587191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:00.587200 | orchestrator | 2026-04-09 01:06:00.587206 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 2026-04-09 01:06:00.587213 | 
orchestrator | Thursday 09 April 2026 01:03:50 +0000 (0:00:02.925) 0:01:59.748 ********
2026-04-09 01:06:00.587218 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 01:06:00.587225 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:06:00.587230 | orchestrator | }
2026-04-09 01:06:00.587236 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 01:06:00.587242 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:06:00.587248 | orchestrator | }
2026-04-09 01:06:00.587254 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 01:06:00.587260 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:06:00.587266 | orchestrator | }
2026-04-09 01:06:00.587272 | orchestrator | changed: [testbed-node-3] => {
2026-04-09 01:06:00.587278 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:06:00.587285 | orchestrator | }
2026-04-09 01:06:00.587291 | orchestrator | changed: [testbed-node-4] => {
2026-04-09 01:06:00.587297 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:06:00.587304 | orchestrator | }
2026-04-09 01:06:00.587310 | orchestrator | changed: [testbed-node-5] => {
2026-04-09 01:06:00.587317 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:06:00.587323 | orchestrator | }
2026-04-09 01:06:00.587329 | orchestrator |
2026-04-09 01:06:00.587335 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 01:06:00.587342 | orchestrator | Thursday 09 April 2026 01:03:50 +0000 (0:00:00.673) 0:02:00.421 ********
2026-04-09 01:06:00.587351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.587357 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.587367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.587374 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.587380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.587390 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:00.587396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.587402 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:00.587409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:00.587415 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.587424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:00.587430 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:00.587436 | orchestrator | 2026-04-09 01:06:00.587442 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 01:06:00.587447 | orchestrator | Thursday 09 April 2026 01:03:53 +0000 (0:00:02.883) 0:02:03.305 ******** 2026-04-09 01:06:00.587452 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:00.587458 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:00.587464 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:00.587469 | 
orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:00.587475 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:00.587481 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:00.587490 | orchestrator |
2026-04-09 01:06:00.587496 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-09 01:06:00.587505 | orchestrator | Thursday 09 April 2026 01:03:54 +0000 (0:00:00.595) 0:02:03.901 ********
2026-04-09 01:06:00.587511 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:06:00.587515 | orchestrator |
2026-04-09 01:06:00.587520 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-09 01:06:00.587525 | orchestrator | Thursday 09 April 2026 01:03:56 +0000 (0:00:02.224) 0:02:06.125 ********
2026-04-09 01:06:00.587530 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:06:00.587535 | orchestrator |
2026-04-09 01:06:00.587540 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-09 01:06:00.587544 | orchestrator | Thursday 09 April 2026 01:03:58 +0000 (0:00:02.474) 0:02:08.600 ********
2026-04-09 01:06:00.587550 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:06:00.587556 | orchestrator |
2026-04-09 01:06:00.587562 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:00.587567 | orchestrator | Thursday 09 April 2026 01:04:46 +0000 (0:00:47.017) 0:02:55.617 ********
2026-04-09 01:06:00.587572 | orchestrator |
2026-04-09 01:06:00.587578 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:00.587583 | orchestrator | Thursday 09 April 2026 01:04:46 +0000 (0:00:00.059) 0:02:55.677 ********
2026-04-09 01:06:00.587589 | orchestrator |
2026-04-09 01:06:00.587594 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:00.587599 | orchestrator | Thursday 09 April 2026 01:04:46 +0000 (0:00:00.060) 0:02:55.738 ********
2026-04-09 01:06:00.587604 | orchestrator |
2026-04-09 01:06:00.587610 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:00.587615 | orchestrator | Thursday 09 April 2026 01:04:46 +0000 (0:00:00.058) 0:02:55.796 ********
2026-04-09 01:06:00.587619 | orchestrator |
2026-04-09 01:06:00.587625 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:00.587630 | orchestrator | Thursday 09 April 2026 01:04:46 +0000 (0:00:00.058) 0:02:55.855 ********
2026-04-09 01:06:00.587635 | orchestrator |
2026-04-09 01:06:00.587640 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:00.587645 | orchestrator | Thursday 09 April 2026 01:04:46 +0000 (0:00:00.058) 0:02:55.914 ********
2026-04-09 01:06:00.587649 | orchestrator |
2026-04-09 01:06:00.587655 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-09 01:06:00.587660 | orchestrator | Thursday 09 April 2026 01:04:46 +0000 (0:00:00.060) 0:02:55.975 ********
2026-04-09 01:06:00.587665 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:06:00.587671 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:06:00.587676 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:06:00.587681 | orchestrator |
2026-04-09 01:06:00.587687 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-09 01:06:00.587692 | orchestrator | Thursday 09 April 2026 01:05:10 +0000 (0:00:24.184) 0:03:20.159 ********
2026-04-09 01:06:00.587697 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:06:00.587703 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:06:00.587709 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:06:00.587714 | orchestrator |
2026-04-09 01:06:00.587720 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:06:00.587726 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 01:06:00.587732 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-09 01:06:00.587738 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-09 01:06:00.587749 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 01:06:00.587755 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 01:06:00.587765 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 01:06:00.587769 | orchestrator |
2026-04-09 01:06:00.587774 | orchestrator |
2026-04-09 01:06:00.587780 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:06:00.587784 | orchestrator | Thursday 09 April 2026 01:05:58 +0000 (0:00:47.832) 0:04:07.992 ********
2026-04-09 01:06:00.587789 | orchestrator | ===============================================================================
2026-04-09 01:06:00.587794 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 47.83s
2026-04-09 01:06:00.587799 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.02s
2026-04-09 01:06:00.587804 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.18s
2026-04-09 01:06:00.587808 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 8.27s
2026-04-09 01:06:00.587813 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints
------------- 7.16s 2026-04-09 01:06:00.587818 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.11s 2026-04-09 01:06:00.587823 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.14s 2026-04-09 01:06:00.587828 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.76s 2026-04-09 01:06:00.587833 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.71s 2026-04-09 01:06:00.587842 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.51s 2026-04-09 01:06:00.587848 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.38s 2026-04-09 01:06:00.587853 | orchestrator | service-ks-register : neutron | Creating/deleting services -------------- 3.37s 2026-04-09 01:06:00.587858 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.33s 2026-04-09 01:06:00.587863 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.17s 2026-04-09 01:06:00.587867 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 2.95s 2026-04-09 01:06:00.587872 | orchestrator | service-check-containers : neutron | Check containers ------------------- 2.93s 2026-04-09 01:06:00.587877 | orchestrator | neutron : Copying over config.json files for services ------------------- 2.92s 2026-04-09 01:06:00.587882 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.88s 2026-04-09 01:06:00.587886 | orchestrator | Load and persist kernel modules ----------------------------------------- 2.80s 2026-04-09 01:06:00.587891 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.55s 2026-04-09 01:06:00.587896 | orchestrator | 2026-04-09 01:06:00 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is 
in state STARTED 2026-04-09 01:06:00.587901 | orchestrator | 2026-04-09 01:06:00 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:00.587906 | orchestrator | 2026-04-09 01:06:00 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:00.587912 | orchestrator | 2026-04-09 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:03.620747 | orchestrator | 2026-04-09 01:06:03 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:03.622748 | orchestrator | 2026-04-09 01:06:03 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:06:03.625111 | orchestrator | 2026-04-09 01:06:03 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:03.627184 | orchestrator | 2026-04-09 01:06:03 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:03.627372 | orchestrator | 2026-04-09 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:06.663542 | orchestrator | 2026-04-09 01:06:06 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:06.665106 | orchestrator | 2026-04-09 01:06:06 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:06:06.667612 | orchestrator | 2026-04-09 01:06:06 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:06.669401 | orchestrator | 2026-04-09 01:06:06 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:06.669570 | orchestrator | 2026-04-09 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:09.711111 | orchestrator | 2026-04-09 01:06:09 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:09.712637 | orchestrator | 2026-04-09 01:06:09 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 
01:06:09.714517 | orchestrator | 2026-04-09 01:06:09 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:09.716551 | orchestrator | 2026-04-09 01:06:09 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:09.716650 | orchestrator | 2026-04-09 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:12.765327 | orchestrator | 2026-04-09 01:06:12 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:12.765949 | orchestrator | 2026-04-09 01:06:12 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:06:12.767175 | orchestrator | 2026-04-09 01:06:12 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:12.768551 | orchestrator | 2026-04-09 01:06:12 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:12.768590 | orchestrator | 2026-04-09 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:15.808315 | orchestrator | 2026-04-09 01:06:15 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:15.810186 | orchestrator | 2026-04-09 01:06:15 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:06:15.812103 | orchestrator | 2026-04-09 01:06:15 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:15.813985 | orchestrator | 2026-04-09 01:06:15 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:15.814051 | orchestrator | 2026-04-09 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:18.843175 | orchestrator | 2026-04-09 01:06:18 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:18.843412 | orchestrator | 2026-04-09 01:06:18 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:06:18.844163 | orchestrator 
| 2026-04-09 01:06:18 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:18.844784 | orchestrator | 2026-04-09 01:06:18 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:18.844823 | orchestrator | 2026-04-09 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:21.869964 | orchestrator | 2026-04-09 01:06:21 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:21.870629 | orchestrator | 2026-04-09 01:06:21 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state STARTED 2026-04-09 01:06:21.871538 | orchestrator | 2026-04-09 01:06:21 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:21.873177 | orchestrator | 2026-04-09 01:06:21 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:21.873208 | orchestrator | 2026-04-09 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:24.898835 | orchestrator | 2026-04-09 01:06:24 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:24.901439 | orchestrator | 2026-04-09 01:06:24 | INFO  | Task 76b1c1c8-6c23-45a9-95a4-eb2ffd7a9d93 is in state SUCCESS 2026-04-09 01:06:24.902463 | orchestrator | 2026-04-09 01:06:24.903317 | orchestrator | 2026-04-09 01:06:24.903361 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:06:24.903371 | orchestrator | 2026-04-09 01:06:24.903379 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:06:24.903386 | orchestrator | Thursday 09 April 2026 01:03:47 +0000 (0:00:00.231) 0:00:00.231 ******** 2026-04-09 01:06:24.903393 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:06:24.903401 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:06:24.903408 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:06:24.903414 | 
orchestrator | 2026-04-09 01:06:24.903421 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:06:24.903428 | orchestrator | Thursday 09 April 2026 01:03:47 +0000 (0:00:00.327) 0:00:00.559 ******** 2026-04-09 01:06:24.903434 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-09 01:06:24.903441 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-09 01:06:24.903448 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-09 01:06:24.903455 | orchestrator | 2026-04-09 01:06:24.903461 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-09 01:06:24.903467 | orchestrator | 2026-04-09 01:06:24.903474 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 01:06:24.903480 | orchestrator | Thursday 09 April 2026 01:03:48 +0000 (0:00:00.622) 0:00:01.182 ******** 2026-04-09 01:06:24.903488 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:06:24.903495 | orchestrator | 2026-04-09 01:06:24.903501 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-04-09 01:06:24.903507 | orchestrator | Thursday 09 April 2026 01:03:49 +0000 (0:00:00.986) 0:00:02.168 ******** 2026-04-09 01:06:24.903513 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-04-09 01:06:24.903575 | orchestrator | 2026-04-09 01:06:24.903585 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] *********** 2026-04-09 01:06:24.903592 | orchestrator | Thursday 09 April 2026 01:03:53 +0000 (0:00:04.003) 0:00:06.172 ******** 2026-04-09 01:06:24.903614 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-04-09 01:06:24.903623 | 
orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-04-09 01:06:24.903630 | orchestrator | 2026-04-09 01:06:24.903637 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-04-09 01:06:24.903645 | orchestrator | Thursday 09 April 2026 01:04:00 +0000 (0:00:07.298) 0:00:13.471 ******** 2026-04-09 01:06:24.903654 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:06:24.903768 | orchestrator | 2026-04-09 01:06:24.903778 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-04-09 01:06:24.903785 | orchestrator | Thursday 09 April 2026 01:04:04 +0000 (0:00:03.508) 0:00:16.979 ******** 2026-04-09 01:06:24.903791 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-04-09 01:06:24.903798 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:06:24.903825 | orchestrator | 2026-04-09 01:06:24.903832 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-09 01:06:24.903839 | orchestrator | Thursday 09 April 2026 01:04:08 +0000 (0:00:04.390) 0:00:21.370 ******** 2026-04-09 01:06:24.903846 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:06:24.903853 | orchestrator | 2026-04-09 01:06:24.903860 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-04-09 01:06:24.903867 | orchestrator | Thursday 09 April 2026 01:04:12 +0000 (0:00:03.566) 0:00:24.936 ******** 2026-04-09 01:06:24.903890 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-09 01:06:24.903896 | orchestrator | 2026-04-09 01:06:24.903903 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-09 01:06:24.903909 | orchestrator | Thursday 09 April 2026 01:04:16 +0000 (0:00:04.313) 
0:00:29.250 ******** 2026-04-09 01:06:24.903919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.903945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-09 01:06:24.903953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.903967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.903980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.903988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904137 | orchestrator | 2026-04-09 01:06:24.904143 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-09 01:06:24.904150 | orchestrator | Thursday 09 April 2026 01:04:20 +0000 (0:00:03.750) 0:00:33.001 ******** 2026-04-09 01:06:24.904157 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:24.904163 | orchestrator | 2026-04-09 01:06:24.904169 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-09 01:06:24.904175 | orchestrator | Thursday 09 April 2026 01:04:20 +0000 (0:00:00.112) 0:00:33.113 ******** 2026-04-09 01:06:24.904181 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:24.904187 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:24.904194 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:24.904200 | orchestrator | 2026-04-09 01:06:24.904206 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 01:06:24.904212 | 
orchestrator | Thursday 09 April 2026 01:04:20 +0000 (0:00:00.255) 0:00:33.369 ******** 2026-04-09 01:06:24.904219 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:06:24.904226 | orchestrator | 2026-04-09 01:06:24.904232 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-09 01:06:24.904237 | orchestrator | Thursday 09 April 2026 01:04:20 +0000 (0:00:00.448) 0:00:33.818 ******** 2026-04-09 01:06:24.904249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.904256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.904280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.904287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.904520 | orchestrator | 2026-04-09 01:06:24.904528 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-09 01:06:24.904535 | orchestrator | Thursday 09 April 2026 01:04:26 +0000 (0:00:05.677) 0:00:39.495 ******** 2026-04-09 01:06:24.904550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:24.904699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.904732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:24.904742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.904749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:24.905367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.905390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-04-09 01:06:24.905412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905455 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 01:06:24.905466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905473 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:24.905480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2026-04-09 01:06:24.905494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905505 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:24.905512 | orchestrator | 2026-04-09 01:06:24.905518 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-09 01:06:24.905525 | orchestrator | Thursday 09 April 2026 01:04:27 +0000 (0:00:01.123) 0:00:40.619 ******** 2026-04-09 01:06:24.905538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': 
['option httpchk']}}}})  2026-04-09 01:06:24.905545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.905556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:24.905570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:24.905586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-04-09 01:06:24.905593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.905607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2026-04-09 01:06:24.905621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.905628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  
2026-04-09 01:06:24.905653 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:24.905661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-04-09 01:06:24.905687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905694 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:24.905701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.905713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  
2026-04-09 01:06:24.905721 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:24.905728 | orchestrator |
2026-04-09 01:06:24.905735 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-09 01:06:24.905745 | orchestrator | Thursday 09 April 2026 01:04:29 +0000 (0:00:01.668) 0:00:42.288 ********
2026-04-09 01:06:24.905753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.905763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'},
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.905770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.905782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905916 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.905941 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.905949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.905958 | orchestrator |
2026-04-09 01:06:24.905966 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-09 01:06:24.905997 | orchestrator | Thursday 09 April 2026 01:04:36 +0000 (0:00:06.923) 0:00:49.212 ********
2026-04-09 01:06:24.906007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.906060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.906070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.906085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.906313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906320 | orchestrator |
2026-04-09 01:06:24.906340 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-04-09 01:06:24.906348 | orchestrator | Thursday 09 April 2026 01:04:52 +0000 (0:00:16.117) 0:01:05.329 ********
2026-04-09 01:06:24.906355 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-09 01:06:24.906363 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-09 01:06:24.906369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-09 01:06:24.906377 | orchestrator |
2026-04-09 01:06:24.906384 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-04-09 01:06:24.906391 | orchestrator | Thursday 09 April 2026 01:04:56 +0000 (0:00:04.380) 0:01:09.710 ********
2026-04-09 01:06:24.906398 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-09 01:06:24.906405 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-09 01:06:24.906412 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-09 01:06:24.906419 | orchestrator |
2026-04-09 01:06:24.906426 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-04-09 01:06:24.906432 | orchestrator | Thursday 09 April 2026 01:04:59 +0000 (0:00:02.771) 0:01:12.482 ********
2026-04-09 01:06:24.906444 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.906459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.906466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.906480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.906488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.906522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.906564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906620 | orchestrator |
2026-04-09 01:06:24.906627 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-04-09 01:06:24.906634 | orchestrator | Thursday 09 April 2026 01:05:02 +0000 (0:00:02.498) 0:01:14.980 ********
2026-04-09 01:06:24.906644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.906652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.906659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.906672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.906683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.906716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.906760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906811 | orchestrator |
2026-04-09 01:06:24.906818 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-09 01:06:24.906825 | orchestrator | Thursday 09 April 2026 01:05:04 +0000 (0:00:02.580) 0:01:17.561 ********
2026-04-09 01:06:24.906832 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:24.906840 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:24.906847 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:24.906854 | orchestrator |
2026-04-09 01:06:24.906861 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-09 01:06:24.906867 | orchestrator | Thursday 09 April 2026 01:05:04 +0000 (0:00:00.232) 0:01:17.794 ********
2026-04-09 01:06:24.906877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.906885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.906893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906933 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:24.906943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.906951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.906959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.906997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.907006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.907014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.907021 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:24.907032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:06:24.907040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 01:06:24.907048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.907069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.907077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.907084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:06:24.907095 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:24.907123 | orchestrator |
2026-04-09 01:06:24.907129 | orchestrator | TASK [service-check-containers : designate | Check
containers] ***************** 2026-04-09 01:06:24.907135 | orchestrator | Thursday 09 April 2026 01:05:05 +0000 (0:00:00.754) 0:01:18.549 ******** 2026-04-09 01:06:24.907141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.907148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.907167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:06:24.907175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907214 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:06:24.907320 | orchestrator | 2026-04-09 01:06:24.907327 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-04-09 01:06:24.907335 | orchestrator | Thursday 09 April 2026 01:05:10 +0000 (0:00:04.486) 0:01:23.035 ******** 2026-04-09 01:06:24.907342 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 01:06:24.907350 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:06:24.907357 | orchestrator | } 2026-04-09 01:06:24.907364 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 01:06:24.907371 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:06:24.907378 | orchestrator | } 2026-04-09 01:06:24.907385 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 01:06:24.907392 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:06:24.907399 | orchestrator | } 2026-04-09 01:06:24.907406 | orchestrator | 2026-04-09 01:06:24.907412 | orchestrator | TASK [service-check-containers : 
Include tasks] ******************************** 2026-04-09 01:06:24.907419 | orchestrator | Thursday 09 April 2026 01:05:10 +0000 (0:00:00.530) 0:01:23.565 ******** 2026-04-09 01:06:24.907426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:24.907438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.907451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:24.907480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.907500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907542 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:24.907550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907565 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:24.907573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:06:24.907581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:06:24.907594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:06:24.907631 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:24.907638 | orchestrator | 2026-04-09 01:06:24.907645 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 01:06:24.907652 | orchestrator | Thursday 09 April 2026 01:05:12 +0000 (0:00:01.419) 0:01:24.985 ******** 2026-04-09 01:06:24.907659 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:24.907667 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:24.907674 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:24.907680 | orchestrator | 2026-04-09 01:06:24.907687 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-09 01:06:24.907694 | orchestrator | Thursday 
09 April 2026 01:05:12 +0000 (0:00:00.301) 0:01:25.286 ******** 2026-04-09 01:06:24.907702 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-09 01:06:24.907709 | orchestrator | 2026-04-09 01:06:24.907716 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-09 01:06:24.907723 | orchestrator | Thursday 09 April 2026 01:05:15 +0000 (0:00:02.789) 0:01:28.076 ******** 2026-04-09 01:06:24.907730 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 01:06:24.907737 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-09 01:06:24.907743 | orchestrator | 2026-04-09 01:06:24.907750 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-09 01:06:24.907757 | orchestrator | Thursday 09 April 2026 01:05:18 +0000 (0:00:03.035) 0:01:31.111 ******** 2026-04-09 01:06:24.907764 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:24.907772 | orchestrator | 2026-04-09 01:06:24.907778 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-09 01:06:24.907785 | orchestrator | Thursday 09 April 2026 01:05:32 +0000 (0:00:13.990) 0:01:45.102 ******** 2026-04-09 01:06:24.907792 | orchestrator | 2026-04-09 01:06:24.907799 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-09 01:06:24.907806 | orchestrator | Thursday 09 April 2026 01:05:32 +0000 (0:00:00.064) 0:01:45.166 ******** 2026-04-09 01:06:24.907813 | orchestrator | 2026-04-09 01:06:24.907820 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-09 01:06:24.907827 | orchestrator | Thursday 09 April 2026 01:05:32 +0000 (0:00:00.062) 0:01:45.229 ******** 2026-04-09 01:06:24.907834 | orchestrator | 2026-04-09 01:06:24.907841 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 
container] ******** 2026-04-09 01:06:24.907848 | orchestrator | Thursday 09 April 2026 01:05:32 +0000 (0:00:00.064) 0:01:45.293 ******** 2026-04-09 01:06:24.907855 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:24.907863 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:24.907870 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:24.907877 | orchestrator | 2026-04-09 01:06:24.908022 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-09 01:06:24.908035 | orchestrator | Thursday 09 April 2026 01:05:39 +0000 (0:00:07.128) 0:01:52.422 ******** 2026-04-09 01:06:24.908042 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:24.908049 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:24.908056 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:24.908063 | orchestrator | 2026-04-09 01:06:24.908070 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-09 01:06:24.908077 | orchestrator | Thursday 09 April 2026 01:05:44 +0000 (0:00:05.065) 0:01:57.487 ******** 2026-04-09 01:06:24.908085 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:24.908092 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:24.908154 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:24.908164 | orchestrator | 2026-04-09 01:06:24.908171 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-09 01:06:24.908178 | orchestrator | Thursday 09 April 2026 01:05:54 +0000 (0:00:10.249) 0:02:07.737 ******** 2026-04-09 01:06:24.908185 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:24.908193 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:24.908201 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:24.908208 | orchestrator | 2026-04-09 01:06:24.908216 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] 
***************** 2026-04-09 01:06:24.908223 | orchestrator | Thursday 09 April 2026 01:05:59 +0000 (0:00:04.638) 0:02:12.375 ******** 2026-04-09 01:06:24.908230 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:24.908238 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:24.908246 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:24.908254 | orchestrator | 2026-04-09 01:06:24.908261 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-09 01:06:24.908268 | orchestrator | Thursday 09 April 2026 01:06:04 +0000 (0:00:04.665) 0:02:17.041 ******** 2026-04-09 01:06:24.908274 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:24.908282 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:24.908290 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:24.908298 | orchestrator | 2026-04-09 01:06:24.908305 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-09 01:06:24.908313 | orchestrator | Thursday 09 April 2026 01:06:14 +0000 (0:00:10.282) 0:02:27.323 ******** 2026-04-09 01:06:24.908321 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:24.908328 | orchestrator | 2026-04-09 01:06:24.908340 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:06:24.908349 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-09 01:06:24.908357 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:06:24.908363 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:06:24.908371 | orchestrator | 2026-04-09 01:06:24.908378 | orchestrator | 2026-04-09 01:06:24.908385 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 
01:06:24.908392 | orchestrator | Thursday 09 April 2026 01:06:21 +0000 (0:00:07.257) 0:02:34.581 ******** 2026-04-09 01:06:24.908399 | orchestrator | =============================================================================== 2026-04-09 01:06:24.908405 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.12s 2026-04-09 01:06:24.908412 | orchestrator | designate : Running Designate bootstrap container ---------------------- 13.99s 2026-04-09 01:06:24.908420 | orchestrator | designate : Restart designate-worker container ------------------------- 10.28s 2026-04-09 01:06:24.908430 | orchestrator | designate : Restart designate-central container ------------------------ 10.25s 2026-04-09 01:06:24.908437 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 7.30s 2026-04-09 01:06:24.908445 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.26s 2026-04-09 01:06:24.908452 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.13s 2026-04-09 01:06:24.908460 | orchestrator | designate : Copying over config.json files for services ----------------- 6.92s 2026-04-09 01:06:24.908467 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.68s 2026-04-09 01:06:24.908475 | orchestrator | designate : Restart designate-api container ----------------------------- 5.07s 2026-04-09 01:06:24.908482 | orchestrator | designate : Restart designate-mdns container ---------------------------- 4.67s 2026-04-09 01:06:24.908490 | orchestrator | designate : Restart designate-producer container ------------------------ 4.64s 2026-04-09 01:06:24.908502 | orchestrator | service-check-containers : designate | Check containers ----------------- 4.49s 2026-04-09 01:06:24.908509 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.39s 2026-04-09 01:06:24.908515 | 
orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.38s 2026-04-09 01:06:24.908522 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 4.31s 2026-04-09 01:06:24.908530 | orchestrator | service-ks-register : designate | Creating/deleting services ------------ 4.00s 2026-04-09 01:06:24.908537 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.75s 2026-04-09 01:06:24.908544 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.57s 2026-04-09 01:06:24.908552 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.51s 2026-04-09 01:06:24.908559 | orchestrator | 2026-04-09 01:06:24 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:24.908574 | orchestrator | 2026-04-09 01:06:24 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:24.908581 | orchestrator | 2026-04-09 01:06:24 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:24.908590 | orchestrator | 2026-04-09 01:06:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:27.942889 | orchestrator | 2026-04-09 01:06:27 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:27.943157 | orchestrator | 2026-04-09 01:06:27 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:27.943921 | orchestrator | 2026-04-09 01:06:27 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:27.944740 | orchestrator | 2026-04-09 01:06:27 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:27.944787 | orchestrator | 2026-04-09 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:30.971702 | orchestrator | 2026-04-09 01:06:30 | INFO  | Task 
e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:30.973116 | orchestrator | 2026-04-09 01:06:30 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:30.975522 | orchestrator | 2026-04-09 01:06:30 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:30.976498 | orchestrator | 2026-04-09 01:06:30 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:30.976529 | orchestrator | 2026-04-09 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:34.001151 | orchestrator | 2026-04-09 01:06:34 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:34.001476 | orchestrator | 2026-04-09 01:06:34 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:34.002217 | orchestrator | 2026-04-09 01:06:34 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:34.002973 | orchestrator | 2026-04-09 01:06:34 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:34.002997 | orchestrator | 2026-04-09 01:06:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:37.039375 | orchestrator | 2026-04-09 01:06:37 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:37.039421 | orchestrator | 2026-04-09 01:06:37 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:37.039975 | orchestrator | 2026-04-09 01:06:37 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:37.040663 | orchestrator | 2026-04-09 01:06:37 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:37.040696 | orchestrator | 2026-04-09 01:06:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:40.072732 | orchestrator | 2026-04-09 01:06:40 | INFO  | Task 
e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:40.072784 | orchestrator | 2026-04-09 01:06:40 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:40.073293 | orchestrator | 2026-04-09 01:06:40 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:40.074128 | orchestrator | 2026-04-09 01:06:40 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:40.074158 | orchestrator | 2026-04-09 01:06:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:43.103221 | orchestrator | 2026-04-09 01:06:43 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:43.103336 | orchestrator | 2026-04-09 01:06:43 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:43.104379 | orchestrator | 2026-04-09 01:06:43 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:43.105025 | orchestrator | 2026-04-09 01:06:43 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:43.105051 | orchestrator | 2026-04-09 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:46.135227 | orchestrator | 2026-04-09 01:06:46 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:46.135277 | orchestrator | 2026-04-09 01:06:46 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:46.136649 | orchestrator | 2026-04-09 01:06:46 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:46.136986 | orchestrator | 2026-04-09 01:06:46 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:46.137115 | orchestrator | 2026-04-09 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:49.161335 | orchestrator | 2026-04-09 01:06:49 | INFO  | Task 
e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:49.162386 | orchestrator | 2026-04-09 01:06:49 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:49.163192 | orchestrator | 2026-04-09 01:06:49 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:49.163831 | orchestrator | 2026-04-09 01:06:49 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:49.164051 | orchestrator | 2026-04-09 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:52.198102 | orchestrator | 2026-04-09 01:06:52 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:52.200200 | orchestrator | 2026-04-09 01:06:52 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:52.201254 | orchestrator | 2026-04-09 01:06:52 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:52.202260 | orchestrator | 2026-04-09 01:06:52 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:52.202290 | orchestrator | 2026-04-09 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:55.229889 | orchestrator | 2026-04-09 01:06:55 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:55.230501 | orchestrator | 2026-04-09 01:06:55 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:55.231206 | orchestrator | 2026-04-09 01:06:55 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:55.232156 | orchestrator | 2026-04-09 01:06:55 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:55.232188 | orchestrator | 2026-04-09 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:58.258285 | orchestrator | 2026-04-09 01:06:58 | INFO  | Task 
e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:06:58.260312 | orchestrator | 2026-04-09 01:06:58 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:06:58.261855 | orchestrator | 2026-04-09 01:06:58 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:06:58.263086 | orchestrator | 2026-04-09 01:06:58 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:06:58.263288 | orchestrator | 2026-04-09 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:01.298526 | orchestrator | 2026-04-09 01:07:01 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:07:01.298614 | orchestrator | 2026-04-09 01:07:01 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:07:01.298669 | orchestrator | 2026-04-09 01:07:01 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:07:01.300401 | orchestrator | 2026-04-09 01:07:01 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:07:01.300447 | orchestrator | 2026-04-09 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:04.345816 | orchestrator | 2026-04-09 01:07:04 | INFO  | Task e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state STARTED 2026-04-09 01:07:04.345915 | orchestrator | 2026-04-09 01:07:04 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:07:04.346714 | orchestrator | 2026-04-09 01:07:04 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED 2026-04-09 01:07:04.348267 | orchestrator | 2026-04-09 01:07:04 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:07:04.348313 | orchestrator | 2026-04-09 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:07.395140 | orchestrator | 2026-04-09 01:07:07 | INFO  | Task 
e62df616-bcd8-443c-ae2a-0fe04712ef43 is in state SUCCESS 2026-04-09 01:07:07.396027 | orchestrator | 2026-04-09 01:07:07.396113 | orchestrator | 2026-04-09 01:07:07.396124 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:07:07.396129 | orchestrator | 2026-04-09 01:07:07.396133 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:07:07.396138 | orchestrator | Thursday 09 April 2026 01:05:52 +0000 (0:00:00.292) 0:00:00.292 ******** 2026-04-09 01:07:07.396145 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:07.396153 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:07.396163 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:07.396169 | orchestrator | 2026-04-09 01:07:07.396175 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:07:07.396181 | orchestrator | Thursday 09 April 2026 01:05:52 +0000 (0:00:00.252) 0:00:00.544 ******** 2026-04-09 01:07:07.396188 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-09 01:07:07.396195 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-09 01:07:07.396202 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-09 01:07:07.396208 | orchestrator | 2026-04-09 01:07:07.396214 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-09 01:07:07.396238 | orchestrator | 2026-04-09 01:07:07.396245 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-09 01:07:07.396251 | orchestrator | Thursday 09 April 2026 01:05:53 +0000 (0:00:00.287) 0:00:00.831 ******** 2026-04-09 01:07:07.396256 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:07:07.396260 | orchestrator | 2026-04-09 
01:07:07.396264 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-04-09 01:07:07.396268 | orchestrator | Thursday 09 April 2026 01:05:53 +0000 (0:00:00.605) 0:00:01.436 ******** 2026-04-09 01:07:07.396272 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-09 01:07:07.396275 | orchestrator | 2026-04-09 01:07:07.396279 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] *********** 2026-04-09 01:07:07.396283 | orchestrator | Thursday 09 April 2026 01:05:57 +0000 (0:00:03.354) 0:00:04.791 ******** 2026-04-09 01:07:07.396287 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-09 01:07:07.396291 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-09 01:07:07.396295 | orchestrator | 2026-04-09 01:07:07.396299 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-09 01:07:07.396303 | orchestrator | Thursday 09 April 2026 01:06:03 +0000 (0:00:06.126) 0:00:10.918 ******** 2026-04-09 01:07:07.396313 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:07:07.396317 | orchestrator | 2026-04-09 01:07:07.396321 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-09 01:07:07.396325 | orchestrator | Thursday 09 April 2026 01:06:06 +0000 (0:00:02.823) 0:00:13.742 ******** 2026-04-09 01:07:07.396329 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-09 01:07:07.396342 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:07:07.396352 | orchestrator | 2026-04-09 01:07:07.396361 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-09 01:07:07.396369 | orchestrator | Thursday 09 April 2026 
01:06:10 +0000 (0:00:03.856) 0:00:17.598 ******** 2026-04-09 01:07:07.396375 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:07:07.396381 | orchestrator | 2026-04-09 01:07:07.396387 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-04-09 01:07:07.396394 | orchestrator | Thursday 09 April 2026 01:06:13 +0000 (0:00:03.572) 0:00:21.170 ******** 2026-04-09 01:07:07.396400 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-09 01:07:07.396406 | orchestrator | 2026-04-09 01:07:07.396413 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-09 01:07:07.396419 | orchestrator | Thursday 09 April 2026 01:06:17 +0000 (0:00:03.980) 0:00:25.151 ******** 2026-04-09 01:07:07.396426 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:07.396432 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:07.396438 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:07.396445 | orchestrator | 2026-04-09 01:07:07.396451 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-09 01:07:07.396457 | orchestrator | Thursday 09 April 2026 01:06:18 +0000 (0:00:00.485) 0:00:25.636 ******** 2026-04-09 01:07:07.396479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396513 | orchestrator | 2026-04-09 01:07:07.396519 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-09 01:07:07.396525 | orchestrator | Thursday 09 April 2026 01:06:19 +0000 (0:00:01.838) 0:00:27.474 ******** 2026-04-09 01:07:07.396531 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:07.396537 | orchestrator | 2026-04-09 01:07:07.396544 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-09 01:07:07.396551 | orchestrator | Thursday 09 April 2026 01:06:20 +0000 (0:00:00.108) 0:00:27.582 ******** 2026-04-09 01:07:07.396557 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:07.396564 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:07.396570 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:07.396577 | orchestrator | 2026-04-09 01:07:07.396584 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-09 01:07:07.396590 | orchestrator | Thursday 09 April 2026 01:06:20 +0000 (0:00:00.226) 0:00:27.809 ******** 2026-04-09 01:07:07.396597 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:07:07.396603 | orchestrator | 2026-04-09 01:07:07.396610 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 
2026-04-09 01:07:07.396616 | orchestrator | Thursday 09 April 2026 01:06:20 +0000 (0:00:00.641) 0:00:28.450 ******** 2026-04-09 01:07:07.396623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396660 | orchestrator | 2026-04-09 01:07:07.396666 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-09 01:07:07.396671 | orchestrator | Thursday 09 April 2026 01:06:22 +0000 (0:00:02.034) 0:00:30.484 ******** 2026-04-09 01:07:07.396677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.396695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.396704 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:07.396710 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:07.396717 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.396724 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:07.396730 | orchestrator | 2026-04-09 01:07:07.396736 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-09 01:07:07.396742 | orchestrator | Thursday 09 April 2026 01:06:23 +0000 (0:00:00.936) 0:00:31.421 ******** 2026-04-09 01:07:07.396759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.396767 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:07.396774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.396785 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:07.396795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.396801 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:07.396806 | orchestrator | 2026-04-09 01:07:07.396810 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-09 01:07:07.396815 | orchestrator | Thursday 09 April 2026 01:06:24 +0000 (0:00:01.125) 0:00:32.546 ******** 2026-04-09 01:07:07.396820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396841 | orchestrator | 2026-04-09 01:07:07.396844 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-09 01:07:07.396849 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:01.537) 0:00:34.083 ******** 2026-04-09 01:07:07.396856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.396874 | orchestrator | 2026-04-09 01:07:07.396878 | orchestrator | TASK [placement : Copying over 
placement-api wsgi configuration] ***************
2026-04-09 01:07:07.396882 | orchestrator | Thursday 09 April 2026 01:06:29 +0000 (0:00:02.567) 0:00:36.651 ********
2026-04-09 01:07:07.396886 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-09 01:07:07.396891 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:07:07.396895 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-09 01:07:07.396899 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:07:07.396902 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-09 01:07:07.396906 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:07:07.396910 | orchestrator |
2026-04-09 01:07:07.396914 | orchestrator | TASK [Configure uWSGI for Placement] *******************************************
2026-04-09 01:07:07.396918 | orchestrator | Thursday 09 April 2026 01:06:29 +0000 (0:00:00.421) 0:00:37.073 ********
2026-04-09 01:07:07.396922 | orchestrator | included: service-uwsgi-config for testbed-node-1, testbed-node-0, testbed-node-2
2026-04-09 01:07:07.396926 | orchestrator |
2026-04-09 01:07:07.396930 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] **********
2026-04-09 01:07:07.396936 | orchestrator | Thursday 09 April 2026 01:06:30 +0000 (0:00:00.634) 0:00:37.707 ********
2026-04-09 01:07:07.396940 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:07.396944 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:07:07.396948 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:07:07.396951 | orchestrator |
2026-04-09 01:07:07.396957 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-04-09 01:07:07.396963 | orchestrator | Thursday 09 April 2026 01:06:31 +0000 (0:00:01.565) 0:00:39.273
******** 2026-04-09 01:07:07.396969 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:07.396975 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:07:07.396980 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:07:07.396986 | orchestrator | 2026-04-09 01:07:07.396991 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-09 01:07:07.396997 | orchestrator | Thursday 09 April 2026 01:06:33 +0000 (0:00:01.297) 0:00:40.570 ******** 2026-04-09 01:07:07.397003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.397013 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:07.397026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.397046 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:07.397053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.397060 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:07.397065 | 
orchestrator | 2026-04-09 01:07:07.397071 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-04-09 01:07:07.397077 | orchestrator | Thursday 09 April 2026 01:06:34 +0000 (0:00:01.160) 0:00:41.731 ******** 2026-04-09 01:07:07.397089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.397112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.397121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 01:07:07.397125 | orchestrator | 2026-04-09 01:07:07.397129 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-04-09 01:07:07.397133 | orchestrator | Thursday 09 April 2026 01:06:35 +0000 (0:00:01.072) 0:00:42.803 ******** 2026-04-09 01:07:07.397137 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 01:07:07.397141 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:07:07.397145 | orchestrator | } 2026-04-09 01:07:07.397149 | 
orchestrator | changed: [testbed-node-1] => { 2026-04-09 01:07:07.397153 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:07:07.397157 | orchestrator | } 2026-04-09 01:07:07.397161 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 01:07:07.397165 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:07:07.397169 | orchestrator | } 2026-04-09 01:07:07.397173 | orchestrator | 2026-04-09 01:07:07.397176 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 01:07:07.397180 | orchestrator | Thursday 09 April 2026 01:06:35 +0000 (0:00:00.341) 0:00:43.145 ******** 2026-04-09 01:07:07.397188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.397193 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:07.397200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 01:07:07.397204 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:07.397212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}}}})
2026-04-09 01:07:07.397219 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:07:07.397226 | orchestrator |
2026-04-09 01:07:07.397233 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-04-09 01:07:07.397240 | orchestrator | Thursday 09 April 2026 01:06:36 +0000 (0:00:00.783) 0:00:43.928 ********
2026-04-09 01:07:07.397245 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:07.397249 | orchestrator |
2026-04-09 01:07:07.397253 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-04-09 01:07:07.397257 | orchestrator | Thursday 09 April 2026 01:06:38 +0000 (0:00:02.310) 0:00:46.239 ********
2026-04-09 01:07:07.397262 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:07.397269 | orchestrator |
2026-04-09 01:07:07.397275 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-04-09 01:07:07.397281 | orchestrator | Thursday 09 April 2026 01:06:40 +0000 (0:00:02.063) 0:00:48.302 ********
2026-04-09 01:07:07.397287 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:07.397293 | orchestrator |
2026-04-09 01:07:07.397347 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-09 01:07:07.397358 | orchestrator | Thursday 09 April 2026 01:06:54 +0000 (0:00:13.634) 0:01:01.937 ********
2026-04-09 01:07:07.397365 | orchestrator |
2026-04-09 01:07:07.397372 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-09 01:07:07.397378 | orchestrator | Thursday 09 April 2026 01:06:54 +0000 (0:00:00.073) 0:01:02.011 ********
2026-04-09 01:07:07.397386 | orchestrator |
2026-04-09 01:07:07.397394 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-09 01:07:07.397401 | orchestrator | Thursday 09 April 2026 01:06:54 +0000 (0:00:00.069) 0:01:02.080 ********
2026-04-09 01:07:07.397405 | orchestrator |
2026-04-09 01:07:07.397409 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-09 01:07:07.397417 | orchestrator | Thursday 09 April 2026 01:06:54 +0000 (0:00:00.077) 0:01:02.157 ********
2026-04-09 01:07:07.397421 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:07:07.397425 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:07.397429 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:07:07.397433 | orchestrator |
2026-04-09 01:07:07.397442 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:07:07.397446 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-09 01:07:07.397451 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:07:07.397455 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:07:07.397459 | orchestrator |
2026-04-09 01:07:07.397463 | orchestrator |
2026-04-09 01:07:07.397467 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:07:07.397471 | orchestrator | Thursday 09 April 2026 01:07:04 +0000 (0:00:10.366) 0:01:12.524 ********
2026-04-09 01:07:07.397475 | orchestrator | ===============================================================================
2026-04-09 01:07:07.397478 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.63s
2026-04-09 01:07:07.397482 | orchestrator | placement : Restart placement-api container ---------------------------- 10.37s
2026-04-09 01:07:07.397486 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 6.13s
2026-04-09 01:07:07.397490 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 3.98s
2026-04-09 01:07:07.397494 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.86s
2026-04-09 01:07:07.397498 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.57s
2026-04-09 01:07:07.397501 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 3.35s
2026-04-09 01:07:07.397505 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.82s
2026-04-09 01:07:07.397509 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.57s
2026-04-09 01:07:07.397513 | orchestrator | placement : Creating placement databases -------------------------------- 2.31s
2026-04-09 01:07:07.397517 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.07s
2026-04-09 01:07:07.397521 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.03s
2026-04-09 01:07:07.397527 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.84s
2026-04-09 01:07:07.397531 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 1.57s
2026-04-09 01:07:07.397535 | orchestrator | placement : Copying over config.json files for services ----------------- 1.54s
2026-04-09 01:07:07.397539 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s
2026-04-09 01:07:07.397542 | orchestrator | placement : Copying over existing policy file --------------------------- 1.16s
2026-04-09 01:07:07.397546 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.13s
2026-04-09 01:07:07.397550 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.07s
2026-04-09 01:07:07.397554 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.94s
2026-04-09 01:07:07.397618 | orchestrator | 2026-04-09 01:07:07 | INFO  | Task e09e76a2-a560-44fc-8143-38e82276b179 is in state STARTED
2026-04-09 01:07:07.402172 | orchestrator | 2026-04-09 01:07:07 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:07.402980 | orchestrator | 2026-04-09 01:07:07 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:07.406383 | orchestrator | 2026-04-09 01:07:07 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:07.406603 | orchestrator | 2026-04-09 01:07:07 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:10.435210 | orchestrator | 2026-04-09 01:07:10 | INFO  | Task e09e76a2-a560-44fc-8143-38e82276b179 is in state STARTED
2026-04-09 01:07:10.436938 | orchestrator | 2026-04-09 01:07:10 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:10.436994 | orchestrator | 2026-04-09 01:07:10 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:10.438597 | orchestrator | 2026-04-09 01:07:10 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:10.438640 | orchestrator | 2026-04-09 01:07:10 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:13.473076 | orchestrator | 2026-04-09 01:07:13 | INFO  | Task e09e76a2-a560-44fc-8143-38e82276b179 is in state SUCCESS
2026-04-09 01:07:13.475110 | orchestrator | 2026-04-09 01:07:13 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:13.475319 | orchestrator | 2026-04-09 01:07:13 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:13.476180 | orchestrator | 2026-04-09 01:07:13 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:13.477068 | orchestrator | 2026-04-09 01:07:13 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:13.477098 | orchestrator | 2026-04-09 01:07:13 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:16.506221 | orchestrator | 2026-04-09 01:07:16 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:16.508406 | orchestrator | 2026-04-09 01:07:16 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:16.509657 | orchestrator | 2026-04-09 01:07:16 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:16.510938 | orchestrator | 2026-04-09 01:07:16 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:16.510973 | orchestrator | 2026-04-09 01:07:16 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:19.542243 | orchestrator | 2026-04-09 01:07:19 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:19.543878 | orchestrator | 2026-04-09 01:07:19 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:19.544392 | orchestrator | 2026-04-09 01:07:19 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:19.545227 | orchestrator | 2026-04-09 01:07:19 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:19.545256 | orchestrator | 2026-04-09 01:07:19 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:22.575827 | orchestrator | 2026-04-09 01:07:22 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:22.578283 | orchestrator | 2026-04-09 01:07:22 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:22.578742 | orchestrator | 2026-04-09 01:07:22 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:22.580651 | orchestrator | 2026-04-09 01:07:22 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:22.580705 | orchestrator | 2026-04-09 01:07:22 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:25.610600 | orchestrator | 2026-04-09 01:07:25 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:25.612438 | orchestrator | 2026-04-09 01:07:25 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:25.613111 | orchestrator | 2026-04-09 01:07:25 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:25.613831 | orchestrator | 2026-04-09 01:07:25 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:25.613907 | orchestrator | 2026-04-09 01:07:25 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:28.639730 | orchestrator | 2026-04-09 01:07:28 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:28.641347 | orchestrator | 2026-04-09 01:07:28 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:28.643171 | orchestrator | 2026-04-09 01:07:28 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:28.644875 | orchestrator | 2026-04-09 01:07:28 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:28.645280 | orchestrator | 2026-04-09 01:07:28 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:31.672245 | orchestrator | 2026-04-09 01:07:31 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:31.673066 | orchestrator | 2026-04-09 01:07:31 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:31.673589 | orchestrator | 2026-04-09 01:07:31 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:31.674441 | orchestrator | 2026-04-09 01:07:31 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:31.674767 | orchestrator | 2026-04-09 01:07:31 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:34.710693 | orchestrator | 2026-04-09 01:07:34 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:34.712733 | orchestrator | 2026-04-09 01:07:34 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:34.714605 | orchestrator | 2026-04-09 01:07:34 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state STARTED
2026-04-09 01:07:34.716035 | orchestrator | 2026-04-09 01:07:34 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED
2026-04-09 01:07:34.716084 | orchestrator | 2026-04-09 01:07:34 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:37.802212 | orchestrator | 2026-04-09 01:07:37 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:37.803356 | orchestrator | 2026-04-09 01:07:37 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:37.804500 | orchestrator | 2026-04-09 01:07:37 | INFO  | Task 26c6d1a1-749f-4545-8161-35cd71b729ca is in state SUCCESS
2026-04-09 01:07:37.805672 | orchestrator |
2026-04-09 01:07:37.805724 | orchestrator |
2026-04-09 01:07:37.805739 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:07:37.805751 | orchestrator |
2026-04-09 01:07:37.805762 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:07:37.805774 | orchestrator | Thursday 09 April 2026 01:07:08 +0000 (0:00:00.178) 0:00:00.178 ********
2026-04-09 01:07:37.805785 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:07:37.805816 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:07:37.805826 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:07:37.805836 | orchestrator |
2026-04-09 01:07:37.805846 |
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:07:37.805890 | orchestrator | Thursday 09 April 2026 01:07:08 +0000 (0:00:00.353) 0:00:00.532 ******** 2026-04-09 01:07:37.805922 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-04-09 01:07:37.805933 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-04-09 01:07:37.805944 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-04-09 01:07:37.805954 | orchestrator | 2026-04-09 01:07:37.805963 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-04-09 01:07:37.805974 | orchestrator | 2026-04-09 01:07:37.806114 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-04-09 01:07:37.806131 | orchestrator | Thursday 09 April 2026 01:07:09 +0000 (0:00:00.606) 0:00:01.139 ******** 2026-04-09 01:07:37.806141 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:37.806152 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:37.806161 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:37.806171 | orchestrator | 2026-04-09 01:07:37.806181 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:07:37.806191 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:07:37.806211 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:07:37.806222 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:07:37.806259 | orchestrator | 2026-04-09 01:07:37.806269 | orchestrator | 2026-04-09 01:07:37.806279 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:07:37.806289 | orchestrator | Thursday 09 April 2026 01:07:10 +0000 
(0:00:01.079) 0:00:02.218 ******** 2026-04-09 01:07:37.806300 | orchestrator | =============================================================================== 2026-04-09 01:07:37.806310 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.08s 2026-04-09 01:07:37.806321 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-04-09 01:07:37.806331 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-04-09 01:07:37.806342 | orchestrator | 2026-04-09 01:07:37.806352 | orchestrator | 2026-04-09 01:07:37.806362 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:07:37.806372 | orchestrator | 2026-04-09 01:07:37.806384 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:07:37.806395 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:00.273) 0:00:00.273 ******** 2026-04-09 01:07:37.806406 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:37.806419 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:37.806430 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:37.806441 | orchestrator | 2026-04-09 01:07:37.806452 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:07:37.806463 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:00.293) 0:00:00.567 ******** 2026-04-09 01:07:37.806471 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-09 01:07:37.806478 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-09 01:07:37.806485 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-09 01:07:37.806491 | orchestrator | 2026-04-09 01:07:37.806497 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-09 01:07:37.806503 | 
orchestrator | 2026-04-09 01:07:37.806509 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-09 01:07:37.806516 | orchestrator | Thursday 09 April 2026 01:06:27 +0000 (0:00:00.302) 0:00:00.869 ******** 2026-04-09 01:07:37.806522 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:07:37.806529 | orchestrator | 2026-04-09 01:07:37.806535 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-09 01:07:37.806541 | orchestrator | Thursday 09 April 2026 01:06:27 +0000 (0:00:00.609) 0:00:01.478 ******** 2026-04-09 01:07:37.806558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.806579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.806591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.806598 | orchestrator | 2026-04-09 01:07:37.806604 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-09 01:07:37.806613 | orchestrator | Thursday 09 April 2026 01:06:28 +0000 (0:00:01.201) 0:00:02.679 ******** 2026-04-09 01:07:37.806624 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:07:37.806635 | orchestrator | 2026-04-09 01:07:37.806645 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-09 01:07:37.806656 | orchestrator | Thursday 09 April 2026 01:06:29 +0000 (0:00:00.744) 0:00:03.424 ******** 2026-04-09 01:07:37.806666 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:07:37.806677 | orchestrator | 2026-04-09 01:07:37.806687 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra 
CA certificates] ******** 2026-04-09 01:07:37.806698 | orchestrator | Thursday 09 April 2026 01:06:30 +0000 (0:00:00.483) 0:00:03.907 ******** 2026-04-09 01:07:37.806709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.806728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.806747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.806759 | orchestrator | 2026-04-09 01:07:37.806770 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-09 01:07:37.806781 | orchestrator | Thursday 09 April 2026 01:06:31 +0000 (0:00:01.373) 0:00:05.281 ******** 2026-04-09 01:07:37.806791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.806802 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:37.806817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.806829 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:37.806839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.806856 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:37.806867 | orchestrator | 2026-04-09 01:07:37.806878 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-09 01:07:37.806888 | orchestrator | Thursday 09 April 2026 01:06:31 +0000 (0:00:00.310) 0:00:05.591 ******** 2026-04-09 01:07:37.806899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.806910 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:37.806927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.806938 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:37.806950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.806960 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:37.806971 | orchestrator | 2026-04-09 01:07:37.806981 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-09 01:07:37.807012 | orchestrator | Thursday 09 April 2026 01:06:32 +0000 (0:00:00.523) 0:00:06.115 ******** 2026-04-09 01:07:37.807027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.807045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.807057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.807067 | orchestrator | 2026-04-09 01:07:37.807078 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-09 01:07:37.807088 | orchestrator | Thursday 09 April 2026 01:06:33 +0000 (0:00:01.361) 0:00:07.476 ******** 2026-04-09 01:07:37.807105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.807116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.807131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.807148 | orchestrator | 2026-04-09 01:07:37.807158 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-09 01:07:37.807169 | orchestrator | Thursday 09 April 2026 01:06:35 +0000 (0:00:01.400) 0:00:08.877 ******** 2026-04-09 01:07:37.807179 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 01:07:37.807190 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:37.807200 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:37.807211 | orchestrator | 2026-04-09 01:07:37.807221 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-09 01:07:37.807231 | orchestrator | Thursday 09 April 2026 01:06:35 +0000 (0:00:00.245) 0:00:09.123 ******** 2026-04-09 01:07:37.807242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 01:07:37.807252 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 01:07:37.807262 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 01:07:37.807273 | orchestrator | 2026-04-09 01:07:37.807283 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-09 01:07:37.807293 | orchestrator | Thursday 09 April 2026 01:06:36 +0000 (0:00:00.987) 0:00:10.111 ******** 2026-04-09 01:07:37.807302 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 01:07:37.807312 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 01:07:37.807320 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 01:07:37.807330 | orchestrator | 2026-04-09 01:07:37.807339 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-04-09 01:07:37.807348 | orchestrator | Thursday 09 April 2026 01:06:37 +0000 (0:00:01.312) 0:00:11.423 ******** 2026-04-09 01:07:37.807358 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:07:37.807366 | orchestrator | 
2026-04-09 01:07:37.807376 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-04-09 01:07:37.807385 | orchestrator | Thursday 09 April 2026 01:06:38 +0000 (0:00:01.001) 0:00:12.425 ******** 2026-04-09 01:07:37.807394 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:37.807403 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:37.807412 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:37.807421 | orchestrator | 2026-04-09 01:07:37.807430 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-09 01:07:37.807439 | orchestrator | Thursday 09 April 2026 01:06:39 +0000 (0:00:00.614) 0:00:13.039 ******** 2026-04-09 01:07:37.807448 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:07:37.807457 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:37.807466 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:07:37.807475 | orchestrator | 2026-04-09 01:07:37.807484 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-09 01:07:37.807497 | orchestrator | Thursday 09 April 2026 01:06:40 +0000 (0:00:01.194) 0:00:14.234 ******** 2026-04-09 01:07:37.807507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-09 01:07:37.807524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.807535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:37.807544 | orchestrator | 2026-04-09 01:07:37.807554 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-04-09 01:07:37.807563 | orchestrator | Thursday 09 April 2026 01:06:42 +0000 (0:00:01.551) 0:00:15.786 ******** 2026-04-09 01:07:37.807572 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 01:07:37.807582 | orchestrator |  
"msg": "Notifying handlers" 2026-04-09 01:07:37.807591 | orchestrator | } 2026-04-09 01:07:37.807601 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 01:07:37.807609 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:07:37.807619 | orchestrator | } 2026-04-09 01:07:37.807627 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 01:07:37.807637 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:07:37.807646 | orchestrator | } 2026-04-09 01:07:37.807654 | orchestrator | 2026-04-09 01:07:37.807663 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 01:07:37.807672 | orchestrator | Thursday 09 April 2026 01:06:42 +0000 (0:00:00.305) 0:00:16.091 ******** 2026-04-09 01:07:37.807681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.807691 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:37.807705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.807728 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:37.807738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:37.807749 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:37.807758 | orchestrator | 2026-04-09 01:07:37.807766 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-09 01:07:37.807775 | orchestrator | Thursday 09 April 2026 01:06:43 +0000 (0:00:01.591) 0:00:17.683 ******** 2026-04-09 01:07:37.807787 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:37.807796 | orchestrator | 2026-04-09 01:07:37.807805 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-09 01:07:37.807813 | orchestrator | Thursday 09 April 2026 01:06:46 +0000 (0:00:02.249) 0:00:19.933 ******** 
2026-04-09 01:07:37.807822 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:37.807831 | orchestrator | 2026-04-09 01:07:37.807840 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 01:07:37.807848 | orchestrator | Thursday 09 April 2026 01:06:48 +0000 (0:00:02.070) 0:00:22.003 ******** 2026-04-09 01:07:37.807857 | orchestrator | 2026-04-09 01:07:37.807867 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 01:07:37.807875 | orchestrator | Thursday 09 April 2026 01:06:48 +0000 (0:00:00.059) 0:00:22.062 ******** 2026-04-09 01:07:37.807884 | orchestrator | 2026-04-09 01:07:37.807893 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 01:07:37.807901 | orchestrator | Thursday 09 April 2026 01:06:48 +0000 (0:00:00.059) 0:00:22.121 ******** 2026-04-09 01:07:37.807910 | orchestrator | 2026-04-09 01:07:37.807918 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-09 01:07:37.807927 | orchestrator | Thursday 09 April 2026 01:06:48 +0000 (0:00:00.059) 0:00:22.181 ******** 2026-04-09 01:07:37.807935 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:37.807944 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:37.807953 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:37.807962 | orchestrator | 2026-04-09 01:07:37.807971 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-09 01:07:37.807979 | orchestrator | Thursday 09 April 2026 01:06:50 +0000 (0:00:01.857) 0:00:24.039 ******** 2026-04-09 01:07:37.808002 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:37.808012 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:37.808021 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 
2026-04-09 01:07:37.808030 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:37.808039 | orchestrator | 2026-04-09 01:07:37.808048 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-09 01:07:37.808056 | orchestrator | Thursday 09 April 2026 01:07:05 +0000 (0:00:14.931) 0:00:38.970 ******** 2026-04-09 01:07:37.808065 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:37.808074 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:07:37.808083 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:07:37.808092 | orchestrator | 2026-04-09 01:07:37.808101 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-09 01:07:37.808115 | orchestrator | Thursday 09 April 2026 01:07:29 +0000 (0:00:24.549) 0:01:03.519 ******** 2026-04-09 01:07:37.808124 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:37.808132 | orchestrator | 2026-04-09 01:07:37.808141 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-09 01:07:37.808150 | orchestrator | Thursday 09 April 2026 01:07:32 +0000 (0:00:02.444) 0:01:05.964 ******** 2026-04-09 01:07:37.808159 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:37.808167 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:37.808176 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:37.808185 | orchestrator | 2026-04-09 01:07:37.808193 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-09 01:07:37.808202 | orchestrator | Thursday 09 April 2026 01:07:32 +0000 (0:00:00.267) 0:01:06.232 ******** 2026-04-09 01:07:37.808211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-04-09 01:07:37.808228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-09 01:07:37.808238 | orchestrator | 2026-04-09 01:07:37.808246 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-09 01:07:37.808255 | orchestrator | Thursday 09 April 2026 01:07:34 +0000 (0:00:02.371) 0:01:08.603 ******** 2026-04-09 01:07:37.808264 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:37.808274 | orchestrator | 2026-04-09 01:07:37.808282 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:07:37.808293 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:07:37.808303 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:07:37.808312 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:07:37.808323 | orchestrator | 2026-04-09 01:07:37.808332 | orchestrator | 2026-04-09 01:07:37.808340 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:07:37.808346 | orchestrator | Thursday 09 April 2026 01:07:35 +0000 (0:00:00.364) 0:01:08.967 ******** 2026-04-09 01:07:37.808352 | orchestrator | =============================================================================== 2026-04-09 01:07:37.808358 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.55s 2026-04-09 01:07:37.808363 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 14.93s 2026-04-09 01:07:37.808372 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.44s 2026-04-09 01:07:37.808378 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.37s 2026-04-09 01:07:37.808383 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.25s 2026-04-09 01:07:37.808389 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.07s 2026-04-09 01:07:37.808395 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s 2026-04-09 01:07:37.808400 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.59s 2026-04-09 01:07:37.808406 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.55s 2026-04-09 01:07:37.808411 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.40s 2026-04-09 01:07:37.808421 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.37s 2026-04-09 01:07:37.808427 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.36s 2026-04-09 01:07:37.808433 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.31s 2026-04-09 01:07:37.808438 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.20s 2026-04-09 01:07:37.808444 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.20s 2026-04-09 01:07:37.808450 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 1.00s 2026-04-09 01:07:37.808456 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 0.99s 2026-04-09 01:07:37.808461 | orchestrator | grafana : Check if extra configuration file 
exists ---------------------- 0.74s 2026-04-09 01:07:37.808467 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.61s 2026-04-09 01:07:37.808472 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.61s 2026-04-09 01:07:37.808477 | orchestrator | 2026-04-09 01:07:37 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:07:37.808547 | orchestrator | 2026-04-09 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:40.843642 | orchestrator | 2026-04-09 01:07:40 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:07:40.844122 | orchestrator | 2026-04-09 01:07:40 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED 2026-04-09 01:07:40.844889 | orchestrator | 2026-04-09 01:07:40 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:07:40.844935 | orchestrator | 2026-04-09 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:43.876718 | orchestrator | 2026-04-09 01:07:43 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:07:43.880746 | orchestrator | 2026-04-09 01:07:43 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED 2026-04-09 01:07:43.883366 | orchestrator | 2026-04-09 01:07:43 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state STARTED 2026-04-09 01:07:43.883599 | orchestrator | 2026-04-09 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:46.922947 | orchestrator | 2026-04-09 01:07:46 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:07:46.923295 | orchestrator | 2026-04-09 01:07:46 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED 2026-04-09 01:07:46.924852 | orchestrator | 2026-04-09 01:07:46 | INFO  | Task 14e086ac-2fab-47e3-a7ac-fab804f9f10c is in state SUCCESS 2026-04-09 
01:07:46.926280 | orchestrator | 2026-04-09 01:07:46.926314 | orchestrator | 2026-04-09 01:07:46.926322 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:07:46.926330 | orchestrator | 2026-04-09 01:07:46.926337 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:07:46.926344 | orchestrator | Thursday 09 April 2026 01:06:01 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-04-09 01:07:46.926351 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:46.926355 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:46.926362 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:46.926368 | orchestrator | 2026-04-09 01:07:46.926377 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:07:46.926386 | orchestrator | Thursday 09 April 2026 01:06:02 +0000 (0:00:00.246) 0:00:00.524 ******** 2026-04-09 01:07:46.926391 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-09 01:07:46.926398 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-09 01:07:46.926404 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-09 01:07:46.926425 | orchestrator | 2026-04-09 01:07:46.926432 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-09 01:07:46.926437 | orchestrator | 2026-04-09 01:07:46.926443 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-09 01:07:46.926448 | orchestrator | Thursday 09 April 2026 01:06:02 +0000 (0:00:00.275) 0:00:00.799 ******** 2026-04-09 01:07:46.926455 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:07:46.926462 | orchestrator | 2026-04-09 01:07:46.926468 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting 
services] *************** 2026-04-09 01:07:46.926483 | orchestrator | Thursday 09 April 2026 01:06:02 +0000 (0:00:00.542) 0:00:01.341 ******** 2026-04-09 01:07:46.926490 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-09 01:07:46.926497 | orchestrator | 2026-04-09 01:07:46.926503 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] ************** 2026-04-09 01:07:46.926509 | orchestrator | Thursday 09 April 2026 01:06:06 +0000 (0:00:03.316) 0:00:04.658 ******** 2026-04-09 01:07:46.926515 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-09 01:07:46.926521 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-09 01:07:46.926525 | orchestrator | 2026-04-09 01:07:46.926529 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-09 01:07:46.926533 | orchestrator | Thursday 09 April 2026 01:06:12 +0000 (0:00:06.544) 0:00:11.202 ******** 2026-04-09 01:07:46.926537 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:07:46.926540 | orchestrator | 2026-04-09 01:07:46.926544 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-09 01:07:46.926548 | orchestrator | Thursday 09 April 2026 01:06:16 +0000 (0:00:03.398) 0:00:14.600 ******** 2026-04-09 01:07:46.926552 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-09 01:07:46.926556 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:07:46.926560 | orchestrator | 2026-04-09 01:07:46.926564 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-09 01:07:46.926568 | orchestrator | Thursday 09 April 2026 01:06:19 +0000 (0:00:03.776) 0:00:18.377 ******** 2026-04-09 01:07:46.926572 | orchestrator | 
ok: [testbed-node-0] => (item=admin) 2026-04-09 01:07:46.926576 | orchestrator | 2026-04-09 01:07:46.926579 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] ************* 2026-04-09 01:07:46.926583 | orchestrator | Thursday 09 April 2026 01:06:23 +0000 (0:00:03.449) 0:00:21.827 ******** 2026-04-09 01:07:46.926587 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-09 01:07:46.926591 | orchestrator | 2026-04-09 01:07:46.926595 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-09 01:07:46.926762 | orchestrator | Thursday 09 April 2026 01:06:28 +0000 (0:00:04.767) 0:00:26.594 ******** 2026-04-09 01:07:46.926769 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:46.926773 | orchestrator | 2026-04-09 01:07:46.926777 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-09 01:07:46.926781 | orchestrator | Thursday 09 April 2026 01:06:31 +0000 (0:00:03.527) 0:00:30.122 ******** 2026-04-09 01:07:46.926785 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:46.926788 | orchestrator | 2026-04-09 01:07:46.926792 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-09 01:07:46.926796 | orchestrator | Thursday 09 April 2026 01:06:35 +0000 (0:00:04.076) 0:00:34.198 ******** 2026-04-09 01:07:46.926800 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:46.926804 | orchestrator | 2026-04-09 01:07:46.926808 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-09 01:07:46.926812 | orchestrator | Thursday 09 April 2026 01:06:39 +0000 (0:00:03.258) 0:00:37.456 ******** 2026-04-09 01:07:46.926834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.926860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.926872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.926878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.926886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.926902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.926908 | orchestrator | 2026-04-09 01:07:46.926914 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-09 01:07:46.926920 | orchestrator | Thursday 09 April 2026 01:06:41 +0000 (0:00:02.043) 0:00:39.500 ******** 2026-04-09 01:07:46.926925 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:46.926931 | orchestrator | 2026-04-09 01:07:46.926936 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-09 01:07:46.926942 | orchestrator | Thursday 09 April 2026 01:06:41 +0000 (0:00:00.252) 0:00:39.752 ******** 2026-04-09 01:07:46.926948 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
01:07:46.926953 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:46.926959 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:46.926965 | orchestrator | 2026-04-09 01:07:46.926987 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-09 01:07:46.926993 | orchestrator | Thursday 09 April 2026 01:06:41 +0000 (0:00:00.484) 0:00:40.236 ******** 2026-04-09 01:07:46.926999 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:07:46.927005 | orchestrator | 2026-04-09 01:07:46.927014 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-09 01:07:46.927020 | orchestrator | Thursday 09 April 2026 01:06:42 +0000 (0:00:01.152) 0:00:41.388 ******** 2026-04-09 01:07:46.927027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927082 | orchestrator | 2026-04-09 01:07:46.927089 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-09 01:07:46.927099 | orchestrator | Thursday 09 April 2026 01:06:45 +0000 (0:00:02.860) 0:00:44.249 ******** 2026-04-09 01:07:46.927106 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:46.927112 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:46.927118 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:46.927124 | orchestrator | 2026-04-09 01:07:46.927130 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-09 01:07:46.927136 | orchestrator | Thursday 09 April 2026 01:06:46 +0000 (0:00:00.446) 0:00:44.695 ******** 2026-04-09 01:07:46.927142 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:07:46.927148 | orchestrator | 2026-04-09 01:07:46.927154 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-09 01:07:46.927160 | orchestrator | Thursday 09 April 2026 01:06:46 +0000 (0:00:00.466) 0:00:45.162 ******** 2026-04-09 01:07:46.927170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927219 | orchestrator | 2026-04-09 01:07:46.927226 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-09 01:07:46.927231 | orchestrator | Thursday 09 April 2026 01:06:48 +0000 (0:00:01.938) 0:00:47.100 ******** 2026-04-09 01:07:46.927240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927266 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:46.927273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927286 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:46.927297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927312 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:46.927319 | orchestrator | 2026-04-09 01:07:46.927329 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-09 01:07:46.927335 | orchestrator | Thursday 09 April 2026 01:06:49 +0000 (0:00:00.943) 0:00:48.044 ******** 2026-04-09 01:07:46.927342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927355 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:46.927365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927381 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:46.927387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927479 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:46.927485 | orchestrator | 2026-04-09 01:07:46.927491 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-09 01:07:46.927497 | orchestrator | Thursday 09 April 2026 01:06:50 +0000 (0:00:00.948) 0:00:48.992 ******** 2026-04-09 01:07:46.927507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': 
['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:07:46 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:46.927524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 
01:07:46.927557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927564 | orchestrator | 2026-04-09 01:07:46.927569 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-09 01:07:46.927575 | orchestrator | Thursday 09 April 2026 01:06:52 +0000 (0:00:02.143) 0:00:51.136 ******** 2026-04-09 01:07:46.927584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927656 | orchestrator | 2026-04-09 01:07:46.927662 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-09 01:07:46.927668 | orchestrator | Thursday 09 April 2026 01:06:58 +0000 (0:00:05.955) 0:00:57.092 ******** 2026-04-09 01:07:46.927675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927689 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:46.927697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927710 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:46.927715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927723 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:46.927727 | orchestrator | 2026-04-09 01:07:46.927730 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-04-09 01:07:46.927734 | orchestrator | Thursday 09 April 2026 01:06:59 +0000 (0:00:00.658) 0:00:57.751 ******** 2026-04-09 01:07:46.927741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:07:46.927760 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:46.927772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:07:46.927781 | orchestrator |
2026-04-09 01:07:46.927785 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] ***
2026-04-09 01:07:46.927789 | orchestrator | Thursday 09 April 2026 01:07:01 +0000 (0:00:01.889) 0:00:59.640 ********
2026-04-09 01:07:46.927793 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 01:07:46.927797 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:07:46.927801 | orchestrator | }
2026-04-09 01:07:46.927805 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 01:07:46.927809 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:07:46.927812 | orchestrator | }
2026-04-09 01:07:46.927816 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 01:07:46.927820 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 01:07:46.927824 | orchestrator | }
2026-04-09 01:07:46.927828 | orchestrator |
2026-04-09 01:07:46.927832 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 01:07:46.927836 | orchestrator | Thursday 09 April 2026 01:07:01 +0000 (0:00:00.365) 0:01:00.005 ********
2026-04-09 01:07:46.927842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927850 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:46.927855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:07:46.927864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:46.927868 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:46.927874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 01:07:46.927879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:07:46.927883 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:07:46.927887 | orchestrator |
2026-04-09 01:07:46.927890 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-09 01:07:46.927894 | orchestrator | Thursday 09 April 2026 01:07:02 +0000 (0:00:00.242) 0:01:00.992 ********
2026-04-09 01:07:46.927898 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:07:46.927902 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:07:46.927906 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:07:46.927909 | orchestrator |
2026-04-09 01:07:46.927913 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-04-09 01:07:46.927917 | orchestrator | Thursday 09 April 2026 01:07:02 +0000 (0:00:00.242) 0:01:01.235 ********
2026-04-09 01:07:46.927921 |
orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:46.927925 | orchestrator |
2026-04-09 01:07:46.927928 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-04-09 01:07:46.927932 | orchestrator | Thursday 09 April 2026 01:07:05 +0000 (0:00:02.720) 0:01:03.955 ********
2026-04-09 01:07:46.927936 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:46.927940 | orchestrator |
2026-04-09 01:07:46.927943 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-04-09 01:07:46.927947 | orchestrator | Thursday 09 April 2026 01:07:07 +0000 (0:00:02.242) 0:01:06.197 ********
2026-04-09 01:07:46.927954 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:46.927958 | orchestrator |
2026-04-09 01:07:46.927961 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-09 01:07:46.927965 | orchestrator | Thursday 09 April 2026 01:07:21 +0000 (0:00:13.937) 0:01:20.135 ********
2026-04-09 01:07:46.927981 | orchestrator |
2026-04-09 01:07:46.927988 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-09 01:07:46.927994 | orchestrator | Thursday 09 April 2026 01:07:21 +0000 (0:00:00.059) 0:01:20.195 ********
2026-04-09 01:07:46.928000 | orchestrator |
2026-04-09 01:07:46.928006 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-09 01:07:46.928012 | orchestrator | Thursday 09 April 2026 01:07:21 +0000 (0:00:00.063) 0:01:20.255 ********
2026-04-09 01:07:46.928018 | orchestrator |
2026-04-09 01:07:46.928025 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-09 01:07:46.928032 | orchestrator | Thursday 09 April 2026 01:07:21 +0000 (0:00:00.063) 0:01:20.318 ********
2026-04-09 01:07:46.928038 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:46.928044 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:07:46.928050 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:07:46.928053 | orchestrator |
2026-04-09 01:07:46.928057 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-04-09 01:07:46.928064 | orchestrator | Thursday 09 April 2026 01:07:35 +0000 (0:00:13.993) 0:01:34.312 ********
2026-04-09 01:07:46.928068 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:07:46.928072 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:07:46.928076 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:07:46.928079 | orchestrator |
2026-04-09 01:07:46.928083 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:07:46.928087 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:07:46.928092 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 01:07:46.928096 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 01:07:46.928100 | orchestrator |
2026-04-09 01:07:46.928103 | orchestrator |
2026-04-09 01:07:46.928107 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:07:46.928111 | orchestrator | Thursday 09 April 2026 01:07:45 +0000 (0:00:09.325) 0:01:43.637 ********
2026-04-09 01:07:46.928115 | orchestrator | ===============================================================================
2026-04-09 01:07:46.928119 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.99s
2026-04-09 01:07:46.928122 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 13.94s
2026-04-09 01:07:46.928129 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.33s
2026-04-09 01:07:46.928133 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 6.54s
2026-04-09 01:07:46.928137 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.96s
2026-04-09 01:07:46.928140 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 4.77s
2026-04-09 01:07:46.928144 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.08s
2026-04-09 01:07:46.928148 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.78s
2026-04-09 01:07:46.928152 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.53s
2026-04-09 01:07:46.928155 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.45s
2026-04-09 01:07:46.928159 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.40s
2026-04-09 01:07:46.928166 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 3.32s
2026-04-09 01:07:46.928170 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.26s
2026-04-09 01:07:46.928174 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.86s
2026-04-09 01:07:46.928178 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.72s
2026-04-09 01:07:46.928181 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.24s
2026-04-09 01:07:46.928185 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.14s
2026-04-09 01:07:46.928189 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.04s
2026-04-09 01:07:46.928193 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 1.94s
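Several of the container definitions replayed in this play carry a `healthcheck` block (`interval`, `retries`, `timeout`, and a `test` command such as `healthcheck_curl http://192.168.16.11:9511` or `healthcheck_port magnum-conductor 5672`). As a minimal illustrative sketch of what those parameters imply under Docker-style healthcheck rules (not kolla's actual implementation), a container is marked unhealthy only after `retries` consecutive probe failures:

```python
def evaluate_healthcheck(probe_results, retries=3):
    """Apply the consecutive-failure rule to a sequence of probe
    outcomes (True = probe passed). The probe itself would be the
    configured `test` command, run every `interval` seconds."""
    consecutive_failures = 0
    for ok in probe_results:
        if ok:
            consecutive_failures = 0  # any success resets the counter
        else:
            consecutive_failures += 1
            if consecutive_failures >= retries:
                return "unhealthy"
    return "healthy"
```

With `retries: '3'` as in the log above, two isolated probe failures leave the container healthy; three failures in a row mark it unhealthy.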
2026-04-09 01:07:46.928197 | orchestrator | service-check-containers : magnum | Check containers -------------------- 1.89s
2026-04-09 01:07:49.972109 | orchestrator | 2026-04-09 01:07:49 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:49.972771 | orchestrator | 2026-04-09 01:07:49 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:49.972819 | orchestrator | 2026-04-09 01:07:49 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:53.037231 | orchestrator | 2026-04-09 01:07:53 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:53.037421 | orchestrator | 2026-04-09 01:07:53 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:53.037614 | orchestrator | 2026-04-09 01:07:53 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:56.083592 | orchestrator | 2026-04-09 01:07:56 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:56.083870 | orchestrator | 2026-04-09 01:07:56 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:56.084136 | orchestrator | 2026-04-09 01:07:56 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:07:59.121836 | orchestrator | 2026-04-09 01:07:59 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:07:59.125097 | orchestrator | 2026-04-09 01:07:59 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:07:59.125149 | orchestrator | 2026-04-09 01:07:59 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:08:02.165800 | orchestrator | 2026-04-09 01:08:02 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:08:02.166146 | orchestrator | 2026-04-09 01:08:02 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:08:02.166403 | orchestrator | 2026-04-09 01:08:02 |
INFO  | Wait 1 second(s) until the next check
2026-04-09 01:10:25.245311 | orchestrator | 2026-04-09 01:10:25 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED
2026-04-09 01:10:25.246387 | orchestrator | 2026-04-09 01:10:25 | INFO  
| Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED 2026-04-09 01:10:25.246423 | orchestrator | 2026-04-09 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:10:28.311861 | orchestrator | 2026-04-09 01:10:28 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:10:28.311919 | orchestrator | 2026-04-09 01:10:28 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED 2026-04-09 01:10:28.311931 | orchestrator | 2026-04-09 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:10:31.337837 | orchestrator | 2026-04-09 01:10:31 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:10:31.338435 | orchestrator | 2026-04-09 01:10:31 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED 2026-04-09 01:10:31.338464 | orchestrator | 2026-04-09 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:10:34.378105 | orchestrator | 2026-04-09 01:10:34 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:10:34.378295 | orchestrator | 2026-04-09 01:10:34 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED 2026-04-09 01:10:34.378315 | orchestrator | 2026-04-09 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:10:37.430669 | orchestrator | 2026-04-09 01:10:37 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state STARTED 2026-04-09 01:10:37.432193 | orchestrator | 2026-04-09 01:10:37 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED 2026-04-09 01:10:37.432559 | orchestrator | 2026-04-09 01:10:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:10:40.486299 | orchestrator | 2026-04-09 01:10:40 | INFO  | Task 6b153133-a313-48ed-bb3d-29bae4eec023 is in state SUCCESS 2026-04-09 01:10:40.488096 | orchestrator | 2026-04-09 01:10:40.488171 | orchestrator | 2026-04-09 01:10:40.488180 | orchestrator | PLAY [Group hosts 
based on configuration] ************************************** 2026-04-09 01:10:40.488188 | orchestrator | 2026-04-09 01:10:40.488195 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-09 01:10:40.488219 | orchestrator | Thursday 09 April 2026 01:00:41 +0000 (0:00:00.235) 0:00:00.235 ******** 2026-04-09 01:10:40.488226 | orchestrator | changed: [testbed-manager] 2026-04-09 01:10:40.488232 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.488238 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.488244 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.488250 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.488256 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.488262 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.488268 | orchestrator | 2026-04-09 01:10:40.488277 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:10:40.488283 | orchestrator | Thursday 09 April 2026 01:00:42 +0000 (0:00:00.586) 0:00:00.822 ******** 2026-04-09 01:10:40.488289 | orchestrator | changed: [testbed-manager] 2026-04-09 01:10:40.488295 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.488301 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.488307 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.488313 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.488319 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.488326 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.488332 | orchestrator | 2026-04-09 01:10:40.488338 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:10:40.488345 | orchestrator | Thursday 09 April 2026 01:00:42 +0000 (0:00:00.702) 0:00:01.524 ******** 2026-04-09 01:10:40.488352 | orchestrator | changed: [testbed-manager] => 
(item=enable_nova_True) 2026-04-09 01:10:40.488359 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-09 01:10:40.488367 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-09 01:10:40.488374 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-09 01:10:40.488379 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-09 01:10:40.488383 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-09 01:10:40.488387 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-09 01:10:40.488391 | orchestrator | 2026-04-09 01:10:40.488395 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-09 01:10:40.488398 | orchestrator | 2026-04-09 01:10:40.488402 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-09 01:10:40.488406 | orchestrator | Thursday 09 April 2026 01:00:43 +0000 (0:00:00.865) 0:00:02.390 ******** 2026-04-09 01:10:40.488410 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:10:40.488415 | orchestrator | 2026-04-09 01:10:40.488489 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-09 01:10:40.488495 | orchestrator | Thursday 09 April 2026 01:00:44 +0000 (0:00:00.664) 0:00:03.055 ******** 2026-04-09 01:10:40.488501 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-09 01:10:40.488508 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-09 01:10:40.488513 | orchestrator | 2026-04-09 01:10:40.488519 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-09 01:10:40.488525 | orchestrator | Thursday 09 April 2026 01:00:49 +0000 (0:00:05.283) 0:00:08.338 ******** 2026-04-09 01:10:40.488530 | orchestrator | changed: [testbed-node-0] => 
(item=None) 2026-04-09 01:10:40.488537 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 01:10:40.488542 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.488572 | orchestrator | 2026-04-09 01:10:40.488579 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-09 01:10:40.488586 | orchestrator | Thursday 09 April 2026 01:00:54 +0000 (0:00:05.081) 0:00:13.420 ******** 2026-04-09 01:10:40.488592 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.488597 | orchestrator | 2026-04-09 01:10:40.488601 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-09 01:10:40.488605 | orchestrator | Thursday 09 April 2026 01:00:55 +0000 (0:00:00.660) 0:00:14.081 ******** 2026-04-09 01:10:40.488609 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.488613 | orchestrator | 2026-04-09 01:10:40.488617 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-09 01:10:40.488621 | orchestrator | Thursday 09 April 2026 01:00:57 +0000 (0:00:01.582) 0:00:15.663 ******** 2026-04-09 01:10:40.488624 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.488628 | orchestrator | 2026-04-09 01:10:40.488632 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 01:10:40.488636 | orchestrator | Thursday 09 April 2026 01:01:00 +0000 (0:00:03.234) 0:00:18.897 ******** 2026-04-09 01:10:40.488640 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.488644 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.488647 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.488651 | orchestrator | 2026-04-09 01:10:40.488655 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-09 01:10:40.488659 | orchestrator | Thursday 09 April 2026 01:01:00 +0000 (0:00:00.498) 
0:00:19.396 ******** 2026-04-09 01:10:40.488663 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:10:40.488667 | orchestrator | 2026-04-09 01:10:40.488671 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-09 01:10:40.488675 | orchestrator | Thursday 09 April 2026 01:01:37 +0000 (0:00:36.487) 0:00:55.884 ******** 2026-04-09 01:10:40.488679 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.488684 | orchestrator | 2026-04-09 01:10:40.488688 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-09 01:10:40.488693 | orchestrator | Thursday 09 April 2026 01:01:54 +0000 (0:00:17.219) 0:01:13.103 ******** 2026-04-09 01:10:40.488718 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:10:40.488724 | orchestrator | 2026-04-09 01:10:40.488729 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-09 01:10:40.488733 | orchestrator | Thursday 09 April 2026 01:02:07 +0000 (0:00:13.248) 0:01:26.351 ******** 2026-04-09 01:10:40.488753 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:10:40.488758 | orchestrator | 2026-04-09 01:10:40.488762 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-09 01:10:40.488773 | orchestrator | Thursday 09 April 2026 01:02:08 +0000 (0:00:00.796) 0:01:27.148 ******** 2026-04-09 01:10:40.488778 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.488783 | orchestrator | 2026-04-09 01:10:40.488789 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 01:10:40.488795 | orchestrator | Thursday 09 April 2026 01:02:09 +0000 (0:00:00.730) 0:01:27.878 ******** 2026-04-09 01:10:40.488800 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:10:40.488805 | orchestrator | 2026-04-09 
01:10:40.488810 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-09 01:10:40.488814 | orchestrator | Thursday 09 April 2026 01:02:09 +0000 (0:00:00.635) 0:01:28.514 ******** 2026-04-09 01:10:40.488818 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:10:40.488822 | orchestrator | 2026-04-09 01:10:40.488827 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-09 01:10:40.488831 | orchestrator | Thursday 09 April 2026 01:02:30 +0000 (0:00:20.506) 0:01:49.020 ******** 2026-04-09 01:10:40.488836 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.488840 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.488850 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.488854 | orchestrator | 2026-04-09 01:10:40.488858 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-09 01:10:40.488936 | orchestrator | 2026-04-09 01:10:40.488942 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-09 01:10:40.488949 | orchestrator | Thursday 09 April 2026 01:02:30 +0000 (0:00:00.299) 0:01:49.320 ******** 2026-04-09 01:10:40.488954 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:10:40.488960 | orchestrator | 2026-04-09 01:10:40.488966 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-09 01:10:40.488972 | orchestrator | Thursday 09 April 2026 01:02:31 +0000 (0:00:00.645) 0:01:49.966 ******** 2026-04-09 01:10:40.488978 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.488984 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.488990 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.488996 | orchestrator | 2026-04-09 01:10:40.489003 | orchestrator | TASK [nova-cell : Creating Nova cell database 
user and setting permissions] **** 2026-04-09 01:10:40.489007 | orchestrator | Thursday 09 April 2026 01:02:34 +0000 (0:00:02.923) 0:01:52.890 ******** 2026-04-09 01:10:40.489012 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489066 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489070 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.489074 | orchestrator | 2026-04-09 01:10:40.489077 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-09 01:10:40.489081 | orchestrator | Thursday 09 April 2026 01:02:36 +0000 (0:00:02.225) 0:01:55.115 ******** 2026-04-09 01:10:40.489085 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.489089 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489093 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489097 | orchestrator | 2026-04-09 01:10:40.489101 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-09 01:10:40.489107 | orchestrator | Thursday 09 April 2026 01:02:37 +0000 (0:00:00.695) 0:01:55.811 ******** 2026-04-09 01:10:40.489113 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 01:10:40.489119 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489142 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 01:10:40.489148 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489154 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 01:10:40.489160 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-09 01:10:40.489165 | orchestrator | 2026-04-09 01:10:40.489171 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-09 01:10:40.489178 | orchestrator | Thursday 09 April 2026 01:02:49 +0000 (0:00:12.320) 0:02:08.131 ******** 2026-04-09 01:10:40.489183 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 01:10:40.489187 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489191 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489194 | orchestrator | 2026-04-09 01:10:40.489198 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-09 01:10:40.489202 | orchestrator | Thursday 09 April 2026 01:02:49 +0000 (0:00:00.270) 0:02:08.401 ******** 2026-04-09 01:10:40.489206 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 01:10:40.489210 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.489213 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 01:10:40.489217 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489221 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 01:10:40.489225 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489229 | orchestrator | 2026-04-09 01:10:40.489232 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-09 01:10:40.489237 | orchestrator | Thursday 09 April 2026 01:02:50 +0000 (0:00:01.159) 0:02:09.561 ******** 2026-04-09 01:10:40.489240 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489250 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489254 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.489258 | orchestrator | 2026-04-09 01:10:40.489262 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-09 01:10:40.489266 | orchestrator | Thursday 09 April 2026 01:02:51 +0000 (0:00:00.479) 0:02:10.040 ******** 2026-04-09 01:10:40.489269 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489273 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489277 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.489281 | orchestrator | 2026-04-09 01:10:40.489285 | 
orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-09 01:10:40.489288 | orchestrator | Thursday 09 April 2026 01:02:52 +0000 (0:00:01.006) 0:02:11.047 ******** 2026-04-09 01:10:40.489292 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489296 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489307 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.489311 | orchestrator | 2026-04-09 01:10:40.489315 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-09 01:10:40.489324 | orchestrator | Thursday 09 April 2026 01:02:55 +0000 (0:00:02.875) 0:02:13.922 ******** 2026-04-09 01:10:40.489327 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489331 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489335 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:10:40.489339 | orchestrator | 2026-04-09 01:10:40.489343 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-09 01:10:40.489347 | orchestrator | Thursday 09 April 2026 01:03:19 +0000 (0:00:23.841) 0:02:37.764 ******** 2026-04-09 01:10:40.489351 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489354 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489358 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:10:40.489362 | orchestrator | 2026-04-09 01:10:40.489367 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-09 01:10:40.489372 | orchestrator | Thursday 09 April 2026 01:03:33 +0000 (0:00:14.803) 0:02:52.568 ******** 2026-04-09 01:10:40.489378 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:10:40.489387 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489395 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489402 | orchestrator | 2026-04-09 01:10:40.489408 | orchestrator | TASK 
[nova-cell : Create cell] ************************************************* 2026-04-09 01:10:40.489414 | orchestrator | Thursday 09 April 2026 01:03:35 +0000 (0:00:01.430) 0:02:53.998 ******** 2026-04-09 01:10:40.489420 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489425 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489431 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.489437 | orchestrator | 2026-04-09 01:10:40.489442 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-09 01:10:40.489448 | orchestrator | Thursday 09 April 2026 01:03:49 +0000 (0:00:14.598) 0:03:08.597 ******** 2026-04-09 01:10:40.489455 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.489460 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489466 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489473 | orchestrator | 2026-04-09 01:10:40.489479 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-09 01:10:40.489484 | orchestrator | Thursday 09 April 2026 01:03:51 +0000 (0:00:01.652) 0:03:10.249 ******** 2026-04-09 01:10:40.489490 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.489496 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.489501 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.489507 | orchestrator | 2026-04-09 01:10:40.489513 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-09 01:10:40.489519 | orchestrator | 2026-04-09 01:10:40.489524 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 01:10:40.489530 | orchestrator | Thursday 09 April 2026 01:03:52 +0000 (0:00:00.553) 0:03:10.802 ******** 2026-04-09 01:10:40.489543 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2026-04-09 01:10:40.489550 | orchestrator | 2026-04-09 01:10:40.489556 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] ***************** 2026-04-09 01:10:40.489562 | orchestrator | Thursday 09 April 2026 01:03:53 +0000 (0:00:01.217) 0:03:12.020 ******** 2026-04-09 01:10:40.489568 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-09 01:10:40.489574 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-09 01:10:40.489579 | orchestrator | 2026-04-09 01:10:40.489585 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] **************** 2026-04-09 01:10:40.489592 | orchestrator | Thursday 09 April 2026 01:03:56 +0000 (0:00:03.597) 0:03:15.618 ******** 2026-04-09 01:10:40.489598 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-09 01:10:40.489606 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-09 01:10:40.489612 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-09 01:10:40.489647 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-09 01:10:40.489654 | orchestrator | 2026-04-09 01:10:40.489661 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-09 01:10:40.489679 | orchestrator | Thursday 09 April 2026 01:04:04 +0000 (0:00:07.378) 0:03:22.997 ******** 2026-04-09 01:10:40.489685 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:10:40.489690 | orchestrator | 2026-04-09 01:10:40.489745 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-09 01:10:40.489752 | orchestrator | Thursday 09 April 2026 01:04:07 
+0000 (0:00:03.552) 0:03:26.549 ******** 2026-04-09 01:10:40.489758 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-09 01:10:40.489764 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:10:40.489770 | orchestrator | 2026-04-09 01:10:40.489776 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-09 01:10:40.489781 | orchestrator | Thursday 09 April 2026 01:04:12 +0000 (0:00:04.205) 0:03:30.755 ******** 2026-04-09 01:10:40.489786 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:10:40.489792 | orchestrator | 2026-04-09 01:10:40.489798 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] *************** 2026-04-09 01:10:40.489804 | orchestrator | Thursday 09 April 2026 01:04:15 +0000 (0:00:03.742) 0:03:34.497 ******** 2026-04-09 01:10:40.489831 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-09 01:10:40.489837 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-09 01:10:40.489843 | orchestrator | 2026-04-09 01:10:40.489849 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-09 01:10:40.489865 | orchestrator | Thursday 09 April 2026 01:04:24 +0000 (0:00:08.230) 0:03:42.728 ******** 2026-04-09 01:10:40.489885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.489905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.489913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.489991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.490061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.490087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.490093 | orchestrator | 2026-04-09 01:10:40.490106 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-09 
01:10:40.490114 | orchestrator | Thursday 09 April 2026 01:04:26 +0000 (0:00:02.118) 0:03:44.846 ******** 2026-04-09 01:10:40.490120 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.490128 | orchestrator | 2026-04-09 01:10:40.490137 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-09 01:10:40.490141 | orchestrator | Thursday 09 April 2026 01:04:26 +0000 (0:00:00.106) 0:03:44.952 ******** 2026-04-09 01:10:40.490145 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.490148 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.490153 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.490161 | orchestrator | 2026-04-09 01:10:40.490165 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-09 01:10:40.490169 | orchestrator | Thursday 09 April 2026 01:04:26 +0000 (0:00:00.277) 0:03:45.230 ******** 2026-04-09 01:10:40.490172 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:10:40.490176 | orchestrator | 2026-04-09 01:10:40.490180 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-09 01:10:40.490184 | orchestrator | Thursday 09 April 2026 01:04:27 +0000 (0:00:00.734) 0:03:45.965 ******** 2026-04-09 01:10:40.490188 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.490192 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.490195 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.490201 | orchestrator | 2026-04-09 01:10:40.490207 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 01:10:40.490212 | orchestrator | Thursday 09 April 2026 01:04:27 +0000 (0:00:00.295) 0:03:46.260 ******** 2026-04-09 01:10:40.490275 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:10:40.490284 | 
orchestrator | 2026-04-09 01:10:40.490290 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-09 01:10:40.490304 | orchestrator | Thursday 09 April 2026 01:04:28 +0000 (0:00:00.626) 0:03:46.886 ******** 2026-04-09 01:10:40.490311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 
'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.490374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.490385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.490391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.490397 | orchestrator | 2026-04-09 01:10:40.490404 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 01:10:40.490409 | orchestrator | Thursday 09 April 2026 01:04:32 +0000 (0:00:04.145) 0:03:51.032 ******** 2026-04-09 01:10:40.490416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.490424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.490430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.490442 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.490891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.490974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-09 01:10:40.490987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.490995 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.491003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 
'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.491062 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.491069 | orchestrator | 2026-04-09 01:10:40.491076 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 01:10:40.491086 | orchestrator | Thursday 09 April 2026 01:04:33 +0000 (0:00:00.663) 0:03:51.695 ******** 2026-04-09 01:10:40.491093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.491125 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.491132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 
'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.491150 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.491156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.491191 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.491197 | orchestrator | 2026-04-09 01:10:40.491202 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-09 01:10:40.491208 | orchestrator | Thursday 09 April 2026 01:04:34 +0000 (0:00:01.077) 0:03:52.773 ******** 2026-04-09 01:10:40.491213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491289 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491302 | orchestrator | 2026-04-09 01:10:40.491309 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-09 01:10:40.491315 | orchestrator | Thursday 09 April 2026 01:04:37 +0000 (0:00:03.090) 0:03:55.864 ******** 2026-04-09 01:10:40.491322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 
'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491395 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491416 | orchestrator | 2026-04-09 01:10:40.491422 | orchestrator | TASK [nova : Copying over existing policy file] 
******************************** 2026-04-09 01:10:40.491428 | orchestrator | Thursday 09 April 2026 01:04:46 +0000 (0:00:09.483) 0:04:05.347 ******** 2026-04-09 01:10:40.491435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.491468 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.491475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.491500 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.491510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.491527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.491534 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.491545 | orchestrator | 2026-04-09 01:10:40.491552 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-09 01:10:40.491558 | orchestrator | Thursday 09 April 2026 01:04:48 +0000 (0:00:01.791) 0:04:07.139 ******** 2026-04-09 01:10:40.491565 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.491571 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.491577 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.491583 | orchestrator | 2026-04-09 01:10:40.491589 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-09 01:10:40.491595 | orchestrator | Thursday 09 April 2026 01:04:49 +0000 (0:00:01.390) 0:04:08.529 ******** 2026-04-09 01:10:40.491601 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.491607 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.491614 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.491620 | orchestrator | 2026-04-09 01:10:40.491626 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-04-09 01:10:40.491632 | orchestrator | Thursday 09 April 2026 01:04:50 
+0000 (0:00:00.942) 0:04:09.472 ******** 2026-04-09 01:10:40.491640 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-09 01:10:40.491647 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-09 01:10:40.491653 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.491659 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-09 01:10:40.491665 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-09 01:10:40.491672 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.491678 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-09 01:10:40.491684 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-09 01:10:40.491690 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.491725 | orchestrator | 2026-04-09 01:10:40.491733 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-09 01:10:40.491739 | orchestrator | Thursday 09 April 2026 01:04:51 +0000 (0:00:00.446) 0:04:09.918 ******** 2026-04-09 01:10:40.491746 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-09 01:10:40.491755 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-09 01:10:40.491761 | orchestrator | 2026-04-09 01:10:40.491767 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-09 01:10:40.491773 | orchestrator | Thursday 09 April 2026 01:04:53 +0000 (0:00:02.587) 0:04:12.506 ******** 2026-04-09 01:10:40.491780 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.491786 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.491792 | orchestrator | changed: [testbed-node-1] 2026-04-09 
01:10:40.491799 | orchestrator | 2026-04-09 01:10:40.491805 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-09 01:10:40.491811 | orchestrator | Thursday 09 April 2026 01:04:56 +0000 (0:00:02.252) 0:04:14.758 ******** 2026-04-09 01:10:40.491817 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.491823 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.491829 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.491836 | orchestrator | 2026-04-09 01:10:40.491842 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-09 01:10:40.491848 | orchestrator | Thursday 09 April 2026 01:04:58 +0000 (0:00:01.913) 0:04:16.672 ******** 2026-04-09 01:10:40.491867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 01:10:40.491932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.491956 | orchestrator | 2026-04-09 01:10:40.491965 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-09 01:10:40.491975 | orchestrator | Thursday 09 April 2026 01:05:00 +0000 (0:00:02.623) 0:04:19.296 ******** 2026-04-09 01:10:40.491981 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 01:10:40.491992 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.491998 | orchestrator | } 2026-04-09 01:10:40.492004 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 01:10:40.492014 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.492020 | orchestrator | } 2026-04-09 01:10:40.492026 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 01:10:40.492033 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.492039 | 
orchestrator | } 2026-04-09 01:10:40.492046 | orchestrator | 2026-04-09 01:10:40.492052 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 01:10:40.492059 | orchestrator | Thursday 09 April 2026 01:05:00 +0000 (0:00:00.251) 0:04:19.547 ******** 2026-04-09 01:10:40.492066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.492075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.492083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.492103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.492116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.492123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.492129 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.492135 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.492141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.492147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 01:10:40.492171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.492178 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.492184 | orchestrator | 2026-04-09 01:10:40.492190 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 01:10:40.492195 | orchestrator | Thursday 09 April 2026 01:05:01 +0000 (0:00:00.822) 0:04:20.370 ******** 2026-04-09 01:10:40.492201 | orchestrator | 2026-04-09 01:10:40.492209 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 01:10:40.492215 | orchestrator | Thursday 09 April 2026 01:05:01 +0000 (0:00:00.100) 0:04:20.471 ******** 2026-04-09 01:10:40.492221 | orchestrator | 2026-04-09 01:10:40.492226 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 01:10:40.492232 | orchestrator | Thursday 09 April 2026 01:05:01 +0000 (0:00:00.099) 0:04:20.570 ******** 2026-04-09 01:10:40.492239 | orchestrator | 2026-04-09 01:10:40.492245 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-09 01:10:40.492252 | orchestrator | 
Thursday 09 April 2026 01:05:02 +0000 (0:00:00.098) 0:04:20.669 ******** 2026-04-09 01:10:40.492258 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.492265 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.492271 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.492278 | orchestrator | 2026-04-09 01:10:40.492285 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-09 01:10:40.492292 | orchestrator | Thursday 09 April 2026 01:05:23 +0000 (0:00:21.338) 0:04:42.007 ******** 2026-04-09 01:10:40.492298 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.492303 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.492309 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.492315 | orchestrator | 2026-04-09 01:10:40.492320 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-04-09 01:10:40.492327 | orchestrator | Thursday 09 April 2026 01:05:33 +0000 (0:00:10.036) 0:04:52.044 ******** 2026-04-09 01:10:40.492332 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.492338 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.492345 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.492352 | orchestrator | 2026-04-09 01:10:40.492358 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-09 01:10:40.492365 | orchestrator | 2026-04-09 01:10:40.492371 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:10:40.492376 | orchestrator | Thursday 09 April 2026 01:05:43 +0000 (0:00:09.954) 0:05:01.998 ******** 2026-04-09 01:10:40.492382 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:10:40.492388 | orchestrator | 2026-04-09 01:10:40.492394 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:10:40.492400 | orchestrator | Thursday 09 April 2026 01:05:44 +0000 (0:00:00.982) 0:05:02.981 ******** 2026-04-09 01:10:40.492406 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.492412 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.492426 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.492432 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.492438 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.492445 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.492452 | orchestrator | 2026-04-09 01:10:40.492459 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-09 01:10:40.492465 | orchestrator | Thursday 09 April 2026 01:05:44 +0000 (0:00:00.485) 0:05:03.466 ******** 2026-04-09 01:10:40.492471 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.492477 | orchestrator | 2026-04-09 01:10:40.492483 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-04-09 01:10:40.492490 | orchestrator | Thursday 09 April 2026 01:06:09 +0000 (0:00:24.432) 0:05:27.898 ******** 2026-04-09 01:10:40.492497 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:10:40.492505 | orchestrator | 2026-04-09 01:10:40.492511 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-04-09 01:10:40.492518 | orchestrator | Thursday 09 April 2026 01:06:10 +0000 (0:00:01.080) 0:05:28.978 ******** 2026-04-09 01:10:40.492525 | orchestrator | included: service-image-info for testbed-node-3 2026-04-09 01:10:40.492531 | orchestrator | 2026-04-09 01:10:40.492538 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-04-09 01:10:40.492544 | orchestrator | Thursday 09 April 2026 01:06:10 +0000 (0:00:00.672) 
0:05:29.650 ******** 2026-04-09 01:10:40.492550 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:10:40.492556 | orchestrator | 2026-04-09 01:10:40.492562 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-09 01:10:40.492567 | orchestrator | Thursday 09 April 2026 01:06:13 +0000 (0:00:02.355) 0:05:32.006 ******** 2026-04-09 01:10:40.492573 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:10:40.492579 | orchestrator | 2026-04-09 01:10:40.492585 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-04-09 01:10:40.492590 | orchestrator | Thursday 09 April 2026 01:06:14 +0000 (0:00:01.178) 0:05:33.184 ******** 2026-04-09 01:10:40.492596 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.492603 | orchestrator | 2026-04-09 01:10:40.492609 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-09 01:10:40.492616 | orchestrator | Thursday 09 April 2026 01:06:15 +0000 (0:00:01.148) 0:05:34.333 ******** 2026-04-09 01:10:40.492622 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.492628 | orchestrator | 2026-04-09 01:10:40.492635 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-04-09 01:10:40.492653 | orchestrator | Thursday 09 April 2026 01:06:16 +0000 (0:00:01.232) 0:05:35.566 ******** 2026-04-09 01:10:40.492662 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.492669 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.492675 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.492687 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:10:40.492830 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:10:40.492853 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:10:40.492859 | orchestrator | 2026-04-09 01:10:40.492867 | orchestrator | TASK [nova-cell : Get current Libvirt version] 
********************************* 2026-04-09 01:10:40.492875 | orchestrator | Thursday 09 April 2026 01:06:20 +0000 (0:00:03.799) 0:05:39.365 ******** 2026-04-09 01:10:40.492882 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.492889 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.492896 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.492903 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.492911 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.492917 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.492923 | orchestrator | 2026-04-09 01:10:40.492931 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-04-09 01:10:40.492941 | orchestrator | Thursday 09 April 2026 01:06:22 +0000 (0:00:02.215) 0:05:41.580 ******** 2026-04-09 01:10:40.492948 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.492970 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.492976 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.492985 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.493044 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.493054 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.493062 | orchestrator | 2026-04-09 01:10:40.493069 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-09 01:10:40.493077 | orchestrator | Thursday 09 April 2026 01:06:25 +0000 (0:00:02.331) 0:05:43.912 ******** 2026-04-09 01:10:40.493083 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.493091 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.493098 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.493106 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:10:40.493115 | orchestrator | 2026-04-09 01:10:40.493122 | orchestrator | TASK 
[module-load : Load modules] ********************************************** 2026-04-09 01:10:40.493129 | orchestrator | Thursday 09 April 2026 01:06:25 +0000 (0:00:00.610) 0:05:44.523 ******** 2026-04-09 01:10:40.493137 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-09 01:10:40.493146 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-09 01:10:40.493155 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-09 01:10:40.493163 | orchestrator | 2026-04-09 01:10:40.493170 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-09 01:10:40.493178 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:00.853) 0:05:45.376 ******** 2026-04-09 01:10:40.493186 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-09 01:10:40.493195 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-09 01:10:40.493204 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-09 01:10:40.493210 | orchestrator | 2026-04-09 01:10:40.493219 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-09 01:10:40.493227 | orchestrator | Thursday 09 April 2026 01:06:28 +0000 (0:00:01.349) 0:05:46.725 ******** 2026-04-09 01:10:40.493235 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-09 01:10:40.493242 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.493250 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-09 01:10:40.493258 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.493266 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-09 01:10:40.493274 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.493282 | orchestrator | 2026-04-09 01:10:40.493289 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-09 
01:10:40.493297 | orchestrator | Thursday 09 April 2026 01:06:28 +0000 (0:00:00.482) 0:05:47.208 ******** 2026-04-09 01:10:40.493304 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 01:10:40.493310 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 01:10:40.493316 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.493327 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 01:10:40.493333 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 01:10:40.493340 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 01:10:40.493346 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 01:10:40.493353 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.493359 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 01:10:40.493365 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 01:10:40.493371 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.493379 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 01:10:40.493397 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 01:10:40.493405 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 01:10:40.493412 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 01:10:40.493420 | orchestrator | 2026-04-09 01:10:40.493426 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-09 01:10:40.493432 | orchestrator | Thursday 09 April 2026 01:06:29 +0000 (0:00:01.170) 0:05:48.378 
******** 2026-04-09 01:10:40.493439 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.493445 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.493452 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.493895 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.493924 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.493932 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.493938 | orchestrator | 2026-04-09 01:10:40.493957 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-09 01:10:40.493967 | orchestrator | Thursday 09 April 2026 01:06:30 +0000 (0:00:01.065) 0:05:49.444 ******** 2026-04-09 01:10:40.493973 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.493979 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.493984 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.493990 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.493995 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.494001 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.494007 | orchestrator | 2026-04-09 01:10:40.494041 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-09 01:10:40.494050 | orchestrator | Thursday 09 April 2026 01:06:32 +0000 (0:00:01.757) 0:05:51.201 ******** 2026-04-09 01:10:40.494059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494180 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494210 | orchestrator | 2026-04-09 01:10:40.494217 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:10:40.494224 | orchestrator | Thursday 09 April 2026 01:06:35 +0000 (0:00:02.498) 0:05:53.700 ******** 2026-04-09 01:10:40.494232 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:10:40.494238 | orchestrator | 2026-04-09 01:10:40.494243 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-09 01:10:40.494249 | orchestrator | Thursday 09 April 2026 01:06:36 +0000 (0:00:01.190) 0:05:54.891 ******** 2026-04-09 01:10:40.494259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494271 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494377 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494383 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.494389 | orchestrator | 2026-04-09 01:10:40.494394 | 
orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 01:10:40.494400 | orchestrator | Thursday 09 April 2026 01:06:39 +0000 (0:00:03.196) 0:05:58.087 ******** 2026-04-09 01:10:40.494406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.494417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.494423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.494440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.494447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.494453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494463 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.494470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.494476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494481 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.494488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.494503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.494510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494517 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.494528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494535 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.494541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494547 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.494554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.494561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494567 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.494574 | orchestrator | 2026-04-09 01:10:40.494580 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 01:10:40.494587 | orchestrator | Thursday 09 April 2026 01:06:42 +0000 (0:00:02.638) 0:06:00.726 ******** 2026-04-09 01:10:40.494603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.494609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.494622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494629 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.494636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.494643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.494658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494665 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.494671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.494682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.494689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494743 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.494753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.494759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.494776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494786 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.494792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494806 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.494812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.494819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.494825 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.494831 | orchestrator | 2026-04-09 01:10:40.494838 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:10:40.494844 | orchestrator | Thursday 09 April 2026 01:06:45 +0000 (0:00:02.999) 0:06:03.725 ******** 2026-04-09 01:10:40.494850 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.494856 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.494862 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.494869 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:10:40.494875 | orchestrator | 2026-04-09 01:10:40.494881 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-09 01:10:40.494887 | orchestrator | Thursday 09 April 2026 01:06:45 +0000 
(0:00:00.866) 0:06:04.592 ******** 2026-04-09 01:10:40.494893 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 01:10:40.494900 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 01:10:40.494905 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 01:10:40.494912 | orchestrator | 2026-04-09 01:10:40.494918 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-09 01:10:40.494924 | orchestrator | Thursday 09 April 2026 01:06:46 +0000 (0:00:00.915) 0:06:05.507 ******** 2026-04-09 01:10:40.494930 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 01:10:40.494936 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 01:10:40.494942 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 01:10:40.494947 | orchestrator | 2026-04-09 01:10:40.494953 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-09 01:10:40.494959 | orchestrator | Thursday 09 April 2026 01:06:47 +0000 (0:00:00.948) 0:06:06.456 ******** 2026-04-09 01:10:40.494964 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:10:40.494972 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:10:40.494978 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:10:40.494983 | orchestrator | 2026-04-09 01:10:40.494990 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-09 01:10:40.494996 | orchestrator | Thursday 09 April 2026 01:06:48 +0000 (0:00:00.464) 0:06:06.921 ******** 2026-04-09 01:10:40.495007 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:10:40.495013 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:10:40.495019 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:10:40.495024 | orchestrator | 2026-04-09 01:10:40.495030 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-09 01:10:40.495036 | orchestrator | Thursday 09 April 2026 01:06:48 
+0000 (0:00:00.477) 0:06:07.399 ******** 2026-04-09 01:10:40.495042 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-09 01:10:40.495048 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-09 01:10:40.495054 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-09 01:10:40.495059 | orchestrator | 2026-04-09 01:10:40.495070 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-09 01:10:40.495076 | orchestrator | Thursday 09 April 2026 01:06:49 +0000 (0:00:01.204) 0:06:08.603 ******** 2026-04-09 01:10:40.495087 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-09 01:10:40.495093 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-09 01:10:40.495099 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-09 01:10:40.495104 | orchestrator | 2026-04-09 01:10:40.495110 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-09 01:10:40.495115 | orchestrator | Thursday 09 April 2026 01:06:51 +0000 (0:00:01.332) 0:06:09.936 ******** 2026-04-09 01:10:40.495121 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-09 01:10:40.495127 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-09 01:10:40.495133 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-09 01:10:40.495142 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-09 01:10:40.495151 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-09 01:10:40.495158 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-09 01:10:40.495164 | orchestrator | 2026-04-09 01:10:40.495170 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-09 01:10:40.495176 | orchestrator | Thursday 09 April 2026 01:06:55 +0000 (0:00:03.949) 
0:06:13.885 ******** 2026-04-09 01:10:40.495182 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.495187 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.495193 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.495199 | orchestrator | 2026-04-09 01:10:40.495204 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-09 01:10:40.495210 | orchestrator | Thursday 09 April 2026 01:06:55 +0000 (0:00:00.444) 0:06:14.330 ******** 2026-04-09 01:10:40.495216 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.495221 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.495227 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.495232 | orchestrator | 2026-04-09 01:10:40.495238 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-09 01:10:40.495244 | orchestrator | Thursday 09 April 2026 01:06:56 +0000 (0:00:00.366) 0:06:14.696 ******** 2026-04-09 01:10:40.495250 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.495257 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.495264 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.495270 | orchestrator | 2026-04-09 01:10:40.495277 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-09 01:10:40.495283 | orchestrator | Thursday 09 April 2026 01:06:57 +0000 (0:00:01.901) 0:06:16.597 ******** 2026-04-09 01:10:40.495291 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-09 01:10:40.495299 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-09 
01:10:40.495305 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-09 01:10:40.495319 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-09 01:10:40.495332 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-09 01:10:40.495339 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-09 01:10:40.495346 | orchestrator | 2026-04-09 01:10:40.495352 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-09 01:10:40.495358 | orchestrator | Thursday 09 April 2026 01:07:01 +0000 (0:00:03.186) 0:06:19.784 ******** 2026-04-09 01:10:40.495364 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 01:10:40.495401 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 01:10:40.495407 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 01:10:40.495414 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 01:10:40.495420 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.495425 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 01:10:40.495431 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.495437 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 01:10:40.495443 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.495449 | 
orchestrator | 2026-04-09 01:10:40.495455 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-09 01:10:40.495461 | orchestrator | Thursday 09 April 2026 01:07:04 +0000 (0:00:02.926) 0:06:22.711 ******** 2026-04-09 01:10:40.495467 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.495473 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.495479 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.495485 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-5, testbed-node-3 2026-04-09 01:10:40.495492 | orchestrator | 2026-04-09 01:10:40.495507 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-09 01:10:40.495513 | orchestrator | Thursday 09 April 2026 01:07:05 +0000 (0:00:01.788) 0:06:24.499 ******** 2026-04-09 01:10:40.495519 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 01:10:40.495534 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 01:10:40.495541 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 01:10:40.495546 | orchestrator | 2026-04-09 01:10:40.495553 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-09 01:10:40.495559 | orchestrator | Thursday 09 April 2026 01:07:06 +0000 (0:00:00.949) 0:06:25.448 ******** 2026-04-09 01:10:40.495566 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.495572 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.495578 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.495583 | orchestrator | 2026-04-09 01:10:40.495590 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-09 01:10:40.495596 | orchestrator | Thursday 09 April 2026 01:07:07 +0000 (0:00:00.257) 0:06:25.706 ******** 2026-04-09 01:10:40.495602 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 01:10:40.495608 | orchestrator | 2026-04-09 01:10:40.495614 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-09 01:10:40.495620 | orchestrator | Thursday 09 April 2026 01:07:07 +0000 (0:00:00.109) 0:06:25.815 ******** 2026-04-09 01:10:40.495627 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.495633 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.495639 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.495652 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.495659 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.495664 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.495671 | orchestrator | 2026-04-09 01:10:40.495677 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-09 01:10:40.495683 | orchestrator | Thursday 09 April 2026 01:07:07 +0000 (0:00:00.777) 0:06:26.592 ******** 2026-04-09 01:10:40.495689 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 01:10:40.495713 | orchestrator | 2026-04-09 01:10:40.495722 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-09 01:10:40.495728 | orchestrator | Thursday 09 April 2026 01:07:08 +0000 (0:00:01.058) 0:06:27.651 ******** 2026-04-09 01:10:40.495735 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.495741 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.495748 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.495754 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.495760 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.495766 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.495772 | orchestrator | 2026-04-09 01:10:40.495779 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-09 01:10:40.495785 | orchestrator 
| Thursday 09 April 2026 01:07:09 +0000 (0:00:00.641) 0:06:28.293 ******** 2026-04-09 01:10:40.495794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495866 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495873 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.495935 | orchestrator | 2026-04-09 01:10:40.495942 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-09 01:10:40.495959 | orchestrator | Thursday 09 April 2026 01:07:13 +0000 (0:00:03.722) 0:06:32.015 ******** 2026-04-09 01:10:40.495970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.495978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.495985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.495991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.495998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.496243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.496279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.496289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.496297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.496304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.496346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.496377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.496386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.496393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.496399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.496405 | orchestrator |
2026-04-09 01:10:40.496412 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-09 01:10:40.496418 | orchestrator | Thursday 09 April 2026 01:07:18 +0000 (0:00:05.584) 0:06:37.600 ********
2026-04-09 01:10:40.496424 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:10:40.496431 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:10:40.496437 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:10:40.496443 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.496449 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.496455 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.496461 | orchestrator |
2026-04-09 01:10:40.496468 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-09 01:10:40.496474 | orchestrator | Thursday 09 April 2026 01:07:20 +0000 (0:00:01.182) 0:06:38.783 ********
2026-04-09 01:10:40.496480 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 01:10:40.496487 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 01:10:40.496493 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 01:10:40.496500 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 01:10:40.496513 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.496519 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 01:10:40.496526 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 01:10:40.496532 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.496538 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 01:10:40.496544 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.496551 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 01:10:40.496557 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-09 01:10:40.496563 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 01:10:40.496570 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 01:10:40.496576 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-09 01:10:40.496582 | orchestrator |
2026-04-09 01:10:40.496588 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-09 01:10:40.496599 | orchestrator | Thursday 09 April 2026 01:07:24 +0000 (0:00:03.940) 0:06:42.723 ********
2026-04-09 01:10:40.496606 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:10:40.496612 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:10:40.496623 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:10:40.496629 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.496635 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.496641 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.496647 | orchestrator |
2026-04-09 01:10:40.496653 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-09 01:10:40.496659 | orchestrator | Thursday 09 April 2026 01:07:24 +0000 (0:00:00.619) 0:06:43.343 ********
2026-04-09 01:10:40.496665 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 01:10:40.496671 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 01:10:40.496678 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 01:10:40.496684 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 01:10:40.496690 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496712 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 01:10:40.496720 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-09 01:10:40.496726 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496732 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496738 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496748 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.496755 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496761 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.496767 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496773 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.496786 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496792 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496798 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496803 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496808 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496814 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-09 01:10:40.496820 | orchestrator |
2026-04-09 01:10:40.496826 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-09 01:10:40.496832 | orchestrator | Thursday 09 April 2026 01:07:29 +0000 (0:00:04.568) 0:06:47.911 ********
2026-04-09 01:10:40.496838 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:10:40.496843 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:10:40.496849 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:10:40.496856 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:10:40.496863 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:10:40.496869 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:10:40.496874 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:10:40.496880 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:10:40.496886 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:10:40.496891 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 01:10:40.496898 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:10:40.496903 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:10:40.496915 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:10:40.496921 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.496927 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:10:40.496942 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:10:40.496948 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.496955 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.496962 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:10:40.496969 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:10:40.496976 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 01:10:40.496983 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:10:40.496990 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:10:40.496996 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 01:10:40.497003 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:10:40.497009 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:10:40.497021 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 01:10:40.497028 | orchestrator |
2026-04-09 01:10:40.497033 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-09 01:10:40.497039 | orchestrator | Thursday 09 April 2026 01:07:35 +0000 (0:00:06.140) 0:06:54.051 ********
2026-04-09 01:10:40.497046 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:10:40.497052 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:10:40.497058 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:10:40.497064 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.497070 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.497076 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.497081 | orchestrator |
2026-04-09 01:10:40.497087 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-09 01:10:40.497093 | orchestrator | Thursday 09 April 2026 01:07:35 +0000 (0:00:00.516) 0:06:54.568 ********
2026-04-09 01:10:40.497100 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:10:40.497106 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:10:40.497112 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:10:40.497118 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.497124 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.497132 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.497138 | orchestrator |
2026-04-09 01:10:40.497144 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-09 01:10:40.497151 | orchestrator | Thursday 09 April 2026 01:07:36 +0000 (0:00:00.768) 0:06:55.336 ********
2026-04-09 01:10:40.497157 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.497164 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.497171 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.497179 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:10:40.497186 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:10:40.497192 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:10:40.497198 | orchestrator |
2026-04-09 01:10:40.497204 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-09 01:10:40.497210 | orchestrator | Thursday 09 April 2026 01:07:39 +0000 (0:00:02.526) 0:06:57.863 ********
2026-04-09 01:10:40.497217 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.497224 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.497231 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:10:40.497236 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.497242 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:10:40.497248 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:10:40.497254 | orchestrator |
2026-04-09 01:10:40.497260 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-09 01:10:40.497267 | orchestrator | Thursday 09 April 2026 01:07:41 +0000 (0:00:02.370) 0:07:00.233 ********
2026-04-09 01:10:40.497275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 01:10:40.497299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 01:10:40.497314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497321 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:10:40.497328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 01:10:40.497334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 01:10:40.497340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497347 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:10:40.497359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 01:10:40.497377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 01:10:40.497386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497393 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:10:40.497401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.497408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497415 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.497422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.497430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497443 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.497459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.497468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497475 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.497483 | orchestrator |
2026-04-09 01:10:40.497489 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-09 01:10:40.497495 | orchestrator | Thursday 09 April 2026 01:07:42 +0000 (0:00:01.333) 0:07:01.567 ********
2026-04-09 01:10:40.497502 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-09 01:10:40.497509 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-09 01:10:40.497515 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:10:40.497522 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-09 01:10:40.497528 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-09 01:10:40.497534 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:10:40.497540 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-09 01:10:40.497546 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-09 01:10:40.497553 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:10:40.497559 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-09 01:10:40.497565 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-09 01:10:40.497572 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.497578 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-09 01:10:40.497584 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-09 01:10:40.497590 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.497596 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-09 01:10:40.497603 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-09 01:10:40.497610 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.497616 | orchestrator |
2026-04-09 01:10:40.497622 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] *****************
2026-04-09 01:10:40.497628 | orchestrator | Thursday 09 April 2026 01:07:43 +0000 (0:00:00.807) 0:07:02.374 ********
2026-04-09 01:10:40.497635 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 01:10:40.497654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 01:10:40.497667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-09 01:10:40.497673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.497679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 01:10:40.497685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.497745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-09 01:10:40.497754 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 01:10:40.497770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-09 01:10:40.497776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 01:10:40.497801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.497807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.497823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:10:40.497829 | orchestrator | 2026-04-09 01:10:40.497835 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers 
to restart containers] *** 2026-04-09 01:10:40.497842 | orchestrator | Thursday 09 April 2026 01:07:46 +0000 (0:00:02.871) 0:07:05.246 ******** 2026-04-09 01:10:40.497848 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 01:10:40.497854 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.497861 | orchestrator | } 2026-04-09 01:10:40.497868 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 01:10:40.497874 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.497880 | orchestrator | } 2026-04-09 01:10:40.497886 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 01:10:40.497892 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.497898 | orchestrator | } 2026-04-09 01:10:40.497904 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 01:10:40.497909 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.497915 | orchestrator | } 2026-04-09 01:10:40.497921 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 01:10:40.497927 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.497933 | orchestrator | } 2026-04-09 01:10:40.497938 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 01:10:40.497944 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:10:40.497950 | orchestrator | } 2026-04-09 01:10:40.497955 | orchestrator | 2026-04-09 01:10:40.497961 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 01:10:40.497967 | orchestrator | Thursday 09 April 2026 01:07:47 +0000 (0:00:00.866) 0:07:06.112 ******** 2026-04-09 01:10:40.497974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.497992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.497999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.498005 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 01:10:40.498084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.498094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.498101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:10:40.498113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.498119 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.498126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:10:40.498132 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.498138 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.498152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.498159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.498170 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.498176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.498182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.498188 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.498194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:10:40.498200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:10:40.498206 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.498212 | orchestrator | 2026-04-09 01:10:40.498218 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:10:40.498224 | orchestrator | Thursday 09 April 2026 01:07:49 +0000 (0:00:02.092) 0:07:08.205 ******** 2026-04-09 01:10:40.498230 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.498236 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.498242 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.498248 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.498258 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.498264 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.498271 | orchestrator | 2026-04-09 01:10:40.498277 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:10:40.498287 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:00.573) 0:07:08.778 ******** 2026-04-09 01:10:40.498293 | orchestrator | 2026-04-09 01:10:40.498300 | 
orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:10:40.498306 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:00.128) 0:07:08.907 ******** 2026-04-09 01:10:40.498312 | orchestrator | 2026-04-09 01:10:40.498318 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:10:40.498330 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:00.143) 0:07:09.050 ******** 2026-04-09 01:10:40.498336 | orchestrator | 2026-04-09 01:10:40.498343 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:10:40.498349 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:00.292) 0:07:09.342 ******** 2026-04-09 01:10:40.498355 | orchestrator | 2026-04-09 01:10:40.498362 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:10:40.498368 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:00.132) 0:07:09.475 ******** 2026-04-09 01:10:40.498374 | orchestrator | 2026-04-09 01:10:40.498380 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:10:40.498386 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:00.127) 0:07:09.602 ******** 2026-04-09 01:10:40.498392 | orchestrator | 2026-04-09 01:10:40.498398 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-09 01:10:40.498404 | orchestrator | Thursday 09 April 2026 01:07:51 +0000 (0:00:00.133) 0:07:09.736 ******** 2026-04-09 01:10:40.498410 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.498416 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.498423 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.498429 | orchestrator | 2026-04-09 01:10:40.498435 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy 
container] **************** 2026-04-09 01:10:40.498442 | orchestrator | Thursday 09 April 2026 01:07:58 +0000 (0:00:07.703) 0:07:17.439 ******** 2026-04-09 01:10:40.498448 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.498454 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:10:40.498460 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.498467 | orchestrator | 2026-04-09 01:10:40.498473 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-09 01:10:40.498480 | orchestrator | Thursday 09 April 2026 01:08:10 +0000 (0:00:11.476) 0:07:28.916 ******** 2026-04-09 01:10:40.498489 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.498499 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.498506 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.498512 | orchestrator | 2026-04-09 01:10:40.498518 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-09 01:10:40.498524 | orchestrator | Thursday 09 April 2026 01:08:30 +0000 (0:00:20.285) 0:07:49.202 ******** 2026-04-09 01:10:40.498531 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.498537 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.498544 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.498549 | orchestrator | 2026-04-09 01:10:40.498556 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-09 01:10:40.498562 | orchestrator | Thursday 09 April 2026 01:09:02 +0000 (0:00:31.836) 0:08:21.039 ******** 2026-04-09 01:10:40.498568 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.498574 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.498581 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2026-04-09 01:10:40.498589 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.498596 | orchestrator | 2026-04-09 01:10:40.498603 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-09 01:10:40.498610 | orchestrator | Thursday 09 April 2026 01:09:08 +0000 (0:00:06.171) 0:08:27.210 ******** 2026-04-09 01:10:40.498617 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.498623 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.498629 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.498636 | orchestrator | 2026-04-09 01:10:40.498643 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-09 01:10:40.498651 | orchestrator | Thursday 09 April 2026 01:09:09 +0000 (0:00:00.947) 0:08:28.158 ******** 2026-04-09 01:10:40.498657 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:10:40.498664 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:10:40.498677 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:10:40.498683 | orchestrator | 2026-04-09 01:10:40.498690 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-09 01:10:40.498718 | orchestrator | Thursday 09 April 2026 01:09:32 +0000 (0:00:23.183) 0:08:51.341 ******** 2026-04-09 01:10:40.498725 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.498731 | orchestrator | 2026-04-09 01:10:40.498737 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-09 01:10:40.498743 | orchestrator | Thursday 09 April 2026 01:09:32 +0000 (0:00:00.118) 0:08:51.460 ******** 2026-04-09 01:10:40.498749 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.498755 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.498761 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.498767 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 01:10:40.498772 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.498778 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-04-09 01:10:40.498788 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:10:40.498794 | orchestrator | 2026-04-09 01:10:40.498801 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-09 01:10:40.498806 | orchestrator | Thursday 09 April 2026 01:09:52 +0000 (0:00:19.726) 0:09:11.186 ******** 2026-04-09 01:10:40.498812 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.498818 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.498824 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.498841 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.498847 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.498853 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.498859 | orchestrator | 2026-04-09 01:10:40.498872 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-09 01:10:40.498878 | orchestrator | Thursday 09 April 2026 01:09:59 +0000 (0:00:07.341) 0:09:18.528 ******** 2026-04-09 01:10:40.498883 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:10:40.498889 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:10:40.498895 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:10:40.498901 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:10:40.498907 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:10:40.498912 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-04-09 01:10:40.498918 | orchestrator | 2026-04-09 01:10:40.498924 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 
2026-04-09 01:10:40.498930 | orchestrator | Thursday 09 April 2026 01:10:01 +0000 (0:00:02.103) 0:09:20.632 ******** 2026-04-09 01:10:40.498936 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:10:40.498942 | orchestrator | 2026-04-09 01:10:40.498948 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-09 01:10:40.498953 | orchestrator | Thursday 09 April 2026 01:10:16 +0000 (0:00:14.701) 0:09:35.333 ******** 2026-04-09 01:10:40.498959 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:10:40.498965 | orchestrator | 2026-04-09 01:10:40.498971 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-09 01:10:40.498977 | orchestrator | Thursday 09 April 2026 01:10:17 +0000 (0:00:00.811) 0:09:36.144 ******** 2026-04-09 01:10:40.498983 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:10:40.498989 | orchestrator | 2026-04-09 01:10:40.498995 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-09 01:10:40.499001 | orchestrator | Thursday 09 April 2026 01:10:18 +0000 (0:00:00.804) 0:09:36.949 ******** 2026-04-09 01:10:40.499007 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:10:40.499012 | orchestrator | 2026-04-09 01:10:40.499018 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-09 01:10:40.499031 | orchestrator | 2026-04-09 01:10:40.499037 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-09 01:10:40.499043 | orchestrator | Thursday 09 April 2026 01:10:32 +0000 (0:00:14.147) 0:09:51.096 ******** 2026-04-09 01:10:40.499049 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:10:40.499055 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:10:40.499061 | orchestrator | changed: 
[testbed-node-2]
2026-04-09 01:10:40.499067 | orchestrator |
2026-04-09 01:10:40.499073 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-09 01:10:40.499079 | orchestrator |
2026-04-09 01:10:40.499085 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-09 01:10:40.499090 | orchestrator | Thursday 09 April 2026 01:10:33 +0000 (0:00:01.182) 0:09:52.279 ********
2026-04-09 01:10:40.499096 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.499102 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.499108 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.499113 | orchestrator |
2026-04-09 01:10:40.499119 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-09 01:10:40.499124 | orchestrator |
2026-04-09 01:10:40.499130 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-09 01:10:40.499135 | orchestrator | Thursday 09 April 2026 01:10:34 +0000 (0:00:00.485) 0:09:52.765 ********
2026-04-09 01:10:40.499141 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-09 01:10:40.499147 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-09 01:10:40.499153 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-09 01:10:40.499159 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-09 01:10:40.499164 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-09 01:10:40.499170 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-09 01:10:40.499175 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-09 01:10:40.499180 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-09 01:10:40.499186 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-09 01:10:40.499192 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-09 01:10:40.499197 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-09 01:10:40.499202 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-09 01:10:40.499209 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:10:40.499215 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-09 01:10:40.499221 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-09 01:10:40.499226 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-09 01:10:40.499232 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-09 01:10:40.499238 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-09 01:10:40.499243 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-09 01:10:40.499249 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:10:40.499254 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-09 01:10:40.499260 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-09 01:10:40.499266 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-09 01:10:40.499272 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-09 01:10:40.499278 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-09 01:10:40.499284 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-09 01:10:40.499296 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:10:40.499302 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-09 01:10:40.499308 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-09 01:10:40.499323 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-09 01:10:40.499329 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-09 01:10:40.499335 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-09 01:10:40.499341 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.499347 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-09 01:10:40.499353 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.499359 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-09 01:10:40.499365 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-09 01:10:40.499370 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-09 01:10:40.499376 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-09 01:10:40.499382 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-09 01:10:40.499388 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-09 01:10:40.499394 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.499399 | orchestrator |
2026-04-09 01:10:40.499405 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-09 01:10:40.499411 | orchestrator |
2026-04-09 01:10:40.499417 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-09 01:10:40.499423 | orchestrator | Thursday 09 April 2026 01:10:35 +0000 (0:00:01.293) 0:09:54.059 ********
2026-04-09 01:10:40.499428 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-09 01:10:40.499434 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-09 01:10:40.499441 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.499447 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-09 01:10:40.499453 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-09 01:10:40.499458 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.499464 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-09 01:10:40.499470 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-09 01:10:40.499476 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.499482 | orchestrator |
2026-04-09 01:10:40.499488 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-09 01:10:40.499493 | orchestrator |
2026-04-09 01:10:40.499499 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-09 01:10:40.499505 | orchestrator | Thursday 09 April 2026 01:10:36 +0000 (0:00:00.708) 0:09:54.767 ********
2026-04-09 01:10:40.499511 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.499517 | orchestrator |
2026-04-09 01:10:40.499523 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-09 01:10:40.499529 | orchestrator |
2026-04-09 01:10:40.499535 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-09 01:10:40.499540 | orchestrator | Thursday 09 April 2026 01:10:36 +0000 (0:00:00.771) 0:09:55.539 ********
2026-04-09 01:10:40.499546 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:10:40.499552 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:10:40.499558 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:10:40.499564 | orchestrator |
2026-04-09 01:10:40.499570 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:10:40.499576 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 01:10:40.499584 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-09 01:10:40.499590 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=60  rescued=0 ignored=0
2026-04-09 01:10:40.499596 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 failed=0 skipped=60  rescued=0 ignored=0
2026-04-09 01:10:40.499659 | orchestrator | testbed-node-3 : ok=47  changed=30  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2026-04-09 01:10:40.499666 | orchestrator | testbed-node-4 : ok=46  changed=29  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-09 01:10:40.499672 | orchestrator | testbed-node-5 : ok=41  changed=29  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-04-09 01:10:40.499678 | orchestrator |
2026-04-09 01:10:40.499683 | orchestrator |
2026-04-09 01:10:40.499689 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:10:40.499712 | orchestrator | Thursday 09 April 2026 01:10:37 +0000 (0:00:00.402) 0:09:55.941 ********
2026-04-09 01:10:40.499720 | orchestrator | ===============================================================================
2026-04-09 01:10:40.499726 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 36.49s
2026-04-09 01:10:40.499734 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 31.84s
2026-04-09 01:10:40.499740 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 24.43s
2026-04-09 01:10:40.499746 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.84s
2026-04-09 01:10:40.499756 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.18s
2026-04-09 01:10:40.499762 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.34s
2026-04-09 01:10:40.499772 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.51s
2026-04-09 01:10:40.499778 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.29s
2026-04-09 01:10:40.499783 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 19.73s
2026-04-09 01:10:40.499792 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.22s
2026-04-09 01:10:40.499801 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.80s
2026-04-09 01:10:40.499810 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.70s
2026-04-09 01:10:40.499819 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.60s
2026-04-09 01:10:40.499829 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 14.15s
2026-04-09 01:10:40.499839 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.25s
2026-04-09 01:10:40.499848 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 12.32s
2026-04-09 01:10:40.499858 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.48s
2026-04-09 01:10:40.499866 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.04s
2026-04-09 01:10:40.499876 | orchestrator | nova : Restart nova-metadata container ---------------------------------- 9.95s
2026-04-09 01:10:40.499886 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.48s
2026-04-09 01:10:40.499894 | orchestrator | 2026-04-09 01:10:40 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
2026-04-09 01:10:40.499903 | orchestrator | 2026-04-09 01:10:40 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:10:43.524493 | orchestrator | 2026-04-09 01:10:43 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state STARTED
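The `PLAY RECAP` lines above follow Ansible's fixed `host : key=value …` layout, so per-host counters can be pulled out mechanically. A minimal illustrative parser (not part of any OSISM or Zuul tooling; `parse_recap_line` is a hypothetical helper name):

```python
import re

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Parse one Ansible PLAY RECAP line into (host, counters)."""
    # The host name precedes the first colon; counters follow as key=value pairs.
    host, _, rest = line.partition(":")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, stats = parse_recap_line(
    "testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0"
)
# host == "testbed-node-0"; stats["ok"] == 59 and stats["failed"] == 0
```

A check like `stats["failed"] == 0 and stats["unreachable"] == 0` is one common way post-processing scripts decide whether a play succeeded.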
2026-04-09 01:10:43.524553 | orchestrator | 2026-04-09 01:10:43 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:12:14.804855 | orchestrator | 2026-04-09 01:12:14 | INFO  | Task 2e36c71c-6dd6-412d-b13e-7fbeacb8e17d is in state SUCCESS
2026-04-09 01:12:14.806107 | orchestrator |
2026-04-09 01:12:14.806131 | orchestrator |
2026-04-09 01:12:14.806137 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:12:14.806143 | orchestrator |
2026-04-09 01:12:14.806148 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:12:14.806154 | orchestrator | Thursday 09 April 2026 01:07:13 +0000 (0:00:00.359) 0:00:00.359 ********
2026-04-09 01:12:14.806175 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806181 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:12:14.806195 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:12:14.806200 | orchestrator |
2026-04-09 01:12:14.806204 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 01:12:14.806208 | orchestrator | Thursday 09 April 2026 01:07:14 +0000 (0:00:00.562) 0:00:00.921 ********
2026-04-09 01:12:14.806212 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-09 01:12:14.806216 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-09 01:12:14.806220 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-09 01:12:14.806224 | orchestrator |
2026-04-09 01:12:14.806228 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-09 01:12:14.806232 | orchestrator |
2026-04-09 01:12:14.806236 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 01:12:14.806239 | orchestrator | Thursday 09 April 2026 01:07:15 +0000 (0:00:00.581) 0:00:01.502 ********
2026-04-09 01:12:14.806243 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:12:14.806248 | orchestrator |
2026-04-09 01:12:14.806252 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] **************
2026-04-09 01:12:14.806256 | orchestrator | Thursday 09 April 2026 01:07:15 +0000 (0:00:00.942) 0:00:02.444 ********
2026-04-09 01:12:14.806261 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-04-09 01:12:14.806264 | orchestrator |
2026-04-09 01:12:14.806268 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] *************
2026-04-09 01:12:14.806272 | orchestrator | Thursday 09 April 2026 01:07:19 +0000 (0:00:03.841) 0:00:06.286 ********
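The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" entries above come from a client-side wait loop that polls a Celery-style task until it leaves the running state. A minimal sketch of such a loop, assuming a caller-supplied `get_task_state` function (the actual OSISM client internals may differ):

```python
import time

def wait_for_task(get_task_state, task_id, interval=1.0, timeout=600.0):
    """Poll a task until it leaves the STARTED state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state  # terminal state, e.g. SUCCESS or FAILURE
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still running after {timeout}s")

# Example with a fake backend that reports SUCCESS on the third poll:
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task(lambda _tid: next(states), "2e36c71c", interval=0)
# result == "SUCCESS"
```

Using `time.monotonic()` for the deadline keeps the loop immune to wall-clock adjustments; a fixed 1-second interval matches the cadence visible in the log.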
2026-04-09 01:12:14.806276 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-04-09 01:12:14.806281 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-04-09 01:12:14.806285 | orchestrator |
2026-04-09 01:12:14.806288 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-04-09 01:12:14.806292 | orchestrator | Thursday 09 April 2026 01:07:27 +0000 (0:00:08.073) 0:00:14.360 ********
2026-04-09 01:12:14.806296 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 01:12:14.806301 | orchestrator |
2026-04-09 01:12:14.806305 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-09 01:12:14.806309 | orchestrator | Thursday 09 April 2026 01:07:31 +0000 (0:00:03.587) 0:00:17.948 ********
2026-04-09 01:12:14.806313 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-09 01:12:14.806317 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-09 01:12:14.806321 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 01:12:14.806325 | orchestrator |
2026-04-09 01:12:14.806329 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-09 01:12:14.806333 | orchestrator | Thursday 09 April 2026 01:07:39 +0000 (0:00:08.108) 0:00:26.056 ********
2026-04-09 01:12:14.806337 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 01:12:14.806341 | orchestrator |
2026-04-09 01:12:14.806345 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************
2026-04-09 01:12:14.806349 | orchestrator | Thursday 09 April 2026 01:07:42 +0000 (0:00:03.167) 0:00:29.224 ********
2026-04-09 01:12:14.806353 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-09 01:12:14.806356 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-09 01:12:14.806360 | orchestrator |
2026-04-09 01:12:14.806364 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-09 01:12:14.806368 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:07.367) 0:00:36.591 ********
2026-04-09 01:12:14.806375 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-09 01:12:14.806379 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-09 01:12:14.806383 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-09 01:12:14.806387 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-09 01:12:14.806391 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-09 01:12:14.806394 | orchestrator |
2026-04-09 01:12:14.806398 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 01:12:14.806402 | orchestrator | Thursday 09 April 2026 01:08:06 +0000 (0:00:16.750) 0:00:53.342 ********
2026-04-09 01:12:14.806406 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:12:14.806410 | orchestrator |
2026-04-09 01:12:14.806414 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-09 01:12:14.806417 | orchestrator | Thursday 09 April 2026 01:08:07 +0000 (0:00:00.710) 0:00:54.053 ********
2026-04-09 01:12:14.806421 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806425 | orchestrator |
2026-04-09 01:12:14.806429 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-09 01:12:14.806433 | orchestrator | Thursday 09 April 2026 01:08:13 +0000 (0:00:05.611) 0:00:59.664 ********
2026-04-09 01:12:14.806437 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806440 | orchestrator |
2026-04-09 01:12:14.806444 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-09 01:12:14.806455 | orchestrator | Thursday 09 April 2026 01:08:18 +0000 (0:00:05.170) 0:01:04.835 ********
2026-04-09 01:12:14.806459 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806462 | orchestrator |
2026-04-09 01:12:14.806466 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-09 01:12:14.806470 | orchestrator | Thursday 09 April 2026 01:08:21 +0000 (0:00:03.256) 0:01:08.092 ********
2026-04-09 01:12:14.806474 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-09 01:12:14.806481 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-09 01:12:14.806484 | orchestrator |
2026-04-09 01:12:14.806488 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-09 01:12:14.806492 | orchestrator | Thursday 09 April 2026 01:08:31 +0000 (0:00:10.029) 0:01:18.122 ********
2026-04-09 01:12:14.806496 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-09 01:12:14.806500 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-09 01:12:14.806505 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-09 01:12:14.806509 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-09 01:12:14.806513 | orchestrator |
2026-04-09 01:12:14.806517 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-09 01:12:14.806521 | orchestrator | Thursday 09 April 2026 01:08:49 +0000 (0:00:18.034) 0:01:36.156 ********
2026-04-09 01:12:14.806525 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806529 | orchestrator |
2026-04-09 01:12:14.806532 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-04-09 01:12:14.806536 | orchestrator | Thursday 09 April 2026 01:08:55 +0000 (0:00:05.581) 0:01:41.738 ********
2026-04-09 01:12:14.806540 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806544 | orchestrator |
2026-04-09 01:12:14.806548 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-04-09 01:12:14.806551 | orchestrator | Thursday 09 April 2026 01:09:00 +0000 (0:00:04.889) 0:01:46.627 ********
2026-04-09 01:12:14.806570 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:12:14.806574 | orchestrator |
2026-04-09 01:12:14.806578 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-04-09 01:12:14.806582 | orchestrator | Thursday 09 April 2026 01:09:00 +0000 (0:00:00.551) 0:01:47.179 ********
2026-04-09 01:12:14.806586 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806589 | orchestrator |
2026-04-09 01:12:14.806593 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 01:12:14.806597 | orchestrator | Thursday 09 April 2026 01:09:04 +0000 (0:00:03.623) 0:01:50.803 ********
2026-04-09 01:12:14.806601 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:12:14.806605 | orchestrator |
2026-04-09 01:12:14.806609 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-04-09 01:12:14.806612 | orchestrator | Thursday 09 April 2026 01:09:05 +0000 (0:00:00.811) 0:01:51.615 ********
2026-04-09 01:12:14.806616 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:12:14.806620 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806624 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:12:14.806628 | orchestrator |
2026-04-09 01:12:14.806632 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-04-09 01:12:14.806635 | orchestrator | Thursday 09 April 2026 01:09:11 +0000 (0:00:06.369) 0:01:57.985 ********
2026-04-09 01:12:14.806639 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:12:14.806643 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:12:14.806647 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806651 | orchestrator |
2026-04-09 01:12:14.806655 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-04-09 01:12:14.806658 | orchestrator | Thursday 09 April 2026 01:09:16 +0000 (0:00:04.727) 0:02:02.712 ********
2026-04-09 01:12:14.806662 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806666 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:12:14.806670 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:12:14.806674 | orchestrator |
2026-04-09 01:12:14.806678 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-04-09 01:12:14.806681 | orchestrator | Thursday 09 April 2026 01:09:17 +0000 (0:00:00.899) 0:02:03.612 ********
2026-04-09 01:12:14.806685 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:12:14.806689 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806693 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:12:14.806697 | orchestrator |
2026-04-09 01:12:14.806701 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-04-09 01:12:14.806704 | orchestrator | Thursday 09 April 2026 01:09:18 +0000 (0:00:01.656) 0:02:05.269 ********
2026-04-09 01:12:14.806708 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:12:14.806712 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:12:14.806716 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806720 | orchestrator |
2026-04-09 01:12:14.806723 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-04-09 01:12:14.806727 | orchestrator | Thursday 09 April 2026 01:09:20 +0000 (0:00:01.179) 0:02:06.449 ********
2026-04-09 01:12:14.806731 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806735 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:12:14.806739 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:12:14.806742 | orchestrator |
2026-04-09 01:12:14.806746 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-04-09 01:12:14.806750 | orchestrator | Thursday 09 April 2026 01:09:21 +0000 (0:00:01.269) 0:02:07.718 ********
2026-04-09 01:12:14.806754 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:12:14.806758 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:12:14.806762 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806765 | orchestrator |
2026-04-09 01:12:14.806772 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-04-09 01:12:14.806776 | orchestrator | Thursday 09 April 2026 01:09:24 +0000 (0:00:02.741) 0:02:10.460 ********
2026-04-09 01:12:14.806785 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:12:14.806789 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:12:14.806793 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:12:14.806797 | orchestrator |
2026-04-09 01:12:14.806804 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-04-09 01:12:14.806808 | orchestrator | Thursday 09 April 2026 01:09:25 +0000 (0:00:01.771) 0:02:12.232 ********
2026-04-09 01:12:14.806812 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806815 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:12:14.806819 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:12:14.806823 | orchestrator |
2026-04-09 01:12:14.806827 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-04-09 01:12:14.806831 | orchestrator | Thursday 09 April 2026 01:09:26 +0000 (0:00:00.592) 0:02:12.824 ********
2026-04-09 01:12:14.806835 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:12:14.806839 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:12:14.806842 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806846 | orchestrator |
2026-04-09 01:12:14.806850 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-09 01:12:14.806854 | orchestrator | Thursday 09 April 2026 01:09:30 +0000 (0:00:03.672) 0:02:16.497 ********
2026-04-09 01:12:14.806858 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:12:14.806862 | orchestrator |
2026-04-09 01:12:14.806865 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-09 01:12:14.806869 | orchestrator | Thursday 09 April 2026 01:09:30 +0000 (0:00:00.578) 0:02:17.075 ********
2026-04-09 01:12:14.806873 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806877 | orchestrator |
2026-04-09 01:12:14.806881 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-09 01:12:14.806884 | orchestrator | Thursday 09 April 2026 01:09:34 +0000 (0:00:03.995) 0:02:21.071 ********
2026-04-09 01:12:14.806888 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806892 | orchestrator |
2026-04-09 01:12:14.806896 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-09 01:12:14.806900 | orchestrator | Thursday 09 April 2026 01:09:38 +0000 (0:00:03.657) 0:02:24.728 ********
2026-04-09 01:12:14.806904 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-09 01:12:14.806908 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-09 01:12:14.806912 | orchestrator |
2026-04-09 01:12:14.806915 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-09 01:12:14.806919 | orchestrator | Thursday 09 April 2026 01:09:46 +0000 (0:00:07.980) 0:02:32.708 ********
2026-04-09 01:12:14.806923 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806927 | orchestrator |
2026-04-09 01:12:14.806931 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-04-09 01:12:14.806934 | orchestrator | Thursday 09 April 2026 01:09:50 +0000 (0:00:04.069) 0:02:36.777 ********
2026-04-09 01:12:14.806938 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:12:14.806942 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:12:14.806946 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:12:14.806950 | orchestrator |
2026-04-09 01:12:14.806953 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-04-09 01:12:14.806957 | orchestrator | Thursday 09 April 2026 01:09:50 +0000 (0:00:00.316) 0:02:37.093 ********
2026-04-09 01:12:14.806964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 01:12:14.806978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 01:12:14.806986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-09 01:12:14.806991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 01:12:14.806996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-09 01:12:14.807000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '',
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807058 | 
orchestrator | 2026-04-09 01:12:14.807062 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-09 01:12:14.807066 | orchestrator | Thursday 09 April 2026 01:09:53 +0000 (0:00:03.045) 0:02:40.139 ******** 2026-04-09 01:12:14.807070 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:12:14.807074 | orchestrator | 2026-04-09 01:12:14.807080 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-09 01:12:14.807084 | orchestrator | Thursday 09 April 2026 01:09:53 +0000 (0:00:00.232) 0:02:40.372 ******** 2026-04-09 01:12:14.807088 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:12:14.807091 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:12:14.807095 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:12:14.807099 | orchestrator | 2026-04-09 01:12:14.807103 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-09 01:12:14.807107 | orchestrator | Thursday 09 April 2026 01:09:54 +0000 (0:00:00.595) 0:02:40.968 ******** 2026-04-09 01:12:14.807111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  
2026-04-09 01:12:14.807130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807134 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:12:14.807147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807172 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:12:14.807179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807206 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:12:14.807210 | orchestrator | 2026-04-09 01:12:14.807214 | orchestrator | TASK 
[octavia : include_tasks] ************************************************* 2026-04-09 01:12:14.807218 | orchestrator | Thursday 09 April 2026 01:09:55 +0000 (0:00:01.214) 0:02:42.183 ******** 2026-04-09 01:12:14.807222 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:12:14.807226 | orchestrator | 2026-04-09 01:12:14.807229 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-09 01:12:14.807233 | orchestrator | Thursday 09 April 2026 01:09:57 +0000 (0:00:01.462) 0:02:43.646 ******** 2026-04-09 01:12:14.807237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807281 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807296 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807311 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807322 | orchestrator | 2026-04-09 01:12:14.807326 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-09 01:12:14.807333 | orchestrator | Thursday 09 April 2026 01:10:03 +0000 (0:00:05.979) 0:02:49.625 ******** 2026-04-09 01:12:14.807337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807359 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:12:14.807366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807385 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807389 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:12:14.807398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807423 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:12:14.807427 | orchestrator | 2026-04-09 01:12:14.807430 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-09 01:12:14.807434 | orchestrator | Thursday 09 April 2026 01:10:03 +0000 (0:00:00.615) 0:02:50.241 ******** 2026-04-09 01:12:14.807438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807469 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:12:14.807474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807503 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:12:14.807507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.807511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.807515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.807535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.807539 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:12:14.807543 | orchestrator | 2026-04-09 01:12:14.807547 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-09 01:12:14.807551 | orchestrator | Thursday 09 April 2026 01:10:04 +0000 (0:00:00.850) 0:02:51.091 ******** 2026-04-09 01:12:14.807555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807647 | orchestrator | 2026-04-09 01:12:14.807651 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-09 01:12:14.807655 | orchestrator | Thursday 09 April 2026 01:10:10 +0000 (0:00:05.375) 0:02:56.467 ******** 2026-04-09 01:12:14.807659 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 01:12:14.807663 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 01:12:14.807670 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 01:12:14.807674 | orchestrator | 2026-04-09 01:12:14.807678 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-09 01:12:14.807682 | orchestrator | Thursday 09 April 2026 01:10:11 +0000 (0:00:01.747) 0:02:58.215 ******** 2026-04-09 01:12:14.807691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.807725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.807769 | orchestrator | 2026-04-09 01:12:14.807773 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-09 01:12:14.807777 | orchestrator | Thursday 09 April 
2026 01:10:27 +0000 (0:00:16.212) 0:03:14.427 ******** 2026-04-09 01:12:14.807781 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.807785 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:12:14.807788 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:12:14.807792 | orchestrator | 2026-04-09 01:12:14.807796 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-09 01:12:14.807800 | orchestrator | Thursday 09 April 2026 01:10:30 +0000 (0:00:02.117) 0:03:16.545 ******** 2026-04-09 01:12:14.807804 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807807 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807811 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807815 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807819 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807823 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807826 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807835 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807839 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807843 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807847 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807851 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807855 | orchestrator | 2026-04-09 01:12:14.807858 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-09 01:12:14.807862 | orchestrator | Thursday 09 April 2026 01:10:35 
+0000 (0:00:05.384) 0:03:21.930 ******** 2026-04-09 01:12:14.807866 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807870 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807874 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807878 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807882 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807885 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807889 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807893 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807897 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807901 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807905 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807908 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807912 | orchestrator | 2026-04-09 01:12:14.807916 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-09 01:12:14.807920 | orchestrator | Thursday 09 April 2026 01:10:41 +0000 (0:00:05.858) 0:03:27.788 ******** 2026-04-09 01:12:14.807924 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807928 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807931 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 01:12:14.807935 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807939 | orchestrator | changed: 
[testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807943 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 01:12:14.807947 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807950 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807957 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 01:12:14.807961 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807964 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807968 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 01:12:14.807972 | orchestrator | 2026-04-09 01:12:14.807978 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-04-09 01:12:14.807982 | orchestrator | Thursday 09 April 2026 01:10:46 +0000 (0:00:04.941) 0:03:32.730 ******** 2026-04-09 01:12:14.807986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2026-04-09 01:12:14.807994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:12:14.807998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 
01:12:14.808002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.808008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.808014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:12:14.808018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:12:14.808065 | orchestrator | 2026-04-09 01:12:14.808070 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-09 01:12:14.808073 | orchestrator | Thursday 09 April 2026 01:10:49 +0000 (0:00:03.488) 0:03:36.218 ******** 2026-04-09 01:12:14.808077 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 
01:12:14.808081 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:12:14.808085 | orchestrator | } 2026-04-09 01:12:14.808089 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 01:12:14.808093 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:12:14.808097 | orchestrator | } 2026-04-09 01:12:14.808101 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 01:12:14.808105 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 01:12:14.808108 | orchestrator | } 2026-04-09 01:12:14.808112 | orchestrator | 2026-04-09 01:12:14.808116 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 01:12:14.808120 | orchestrator | Thursday 09 April 2026 01:10:50 +0000 (0:00:00.503) 0:03:36.722 ******** 2026-04-09 01:12:14.808124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.808132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.808142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.808146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.808150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.808154 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:12:14.808158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.808162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.808169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.808179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.808183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.808187 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:12:14.808191 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:12:14.808195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:12:14.808199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.808207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:12:14.808216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:12:14.808220 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:12:14.808224 | orchestrator | 2026-04-09 01:12:14.808228 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-09 01:12:14.808232 | orchestrator | Thursday 09 April 2026 01:10:51 +0000 (0:00:00.838) 0:03:37.561 ******** 2026-04-09 01:12:14.808236 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:12:14.808240 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:12:14.808243 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 01:12:14.808247 | orchestrator | 2026-04-09 01:12:14.808251 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-09 01:12:14.808255 | orchestrator | Thursday 09 April 2026 01:10:51 +0000 (0:00:00.273) 0:03:37.834 ******** 2026-04-09 01:12:14.808259 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808263 | orchestrator | 2026-04-09 01:12:14.808267 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-09 01:12:14.808270 | orchestrator | Thursday 09 April 2026 01:10:53 +0000 (0:00:02.141) 0:03:39.976 ******** 2026-04-09 01:12:14.808274 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808278 | orchestrator | 2026-04-09 01:12:14.808282 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-09 01:12:14.808286 | orchestrator | Thursday 09 April 2026 01:10:55 +0000 (0:00:02.052) 0:03:42.029 ******** 2026-04-09 01:12:14.808289 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808293 | orchestrator | 2026-04-09 01:12:14.808297 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-09 01:12:14.808301 | orchestrator | Thursday 09 April 2026 01:10:58 +0000 (0:00:02.974) 0:03:45.003 ******** 2026-04-09 01:12:14.808305 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808308 | orchestrator | 2026-04-09 01:12:14.808312 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-09 01:12:14.808316 | orchestrator | Thursday 09 April 2026 01:11:01 +0000 (0:00:02.887) 0:03:47.891 ******** 2026-04-09 01:12:14.808320 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808323 | orchestrator | 2026-04-09 01:12:14.808327 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 
2026-04-09 01:12:14.808331 | orchestrator | Thursday 09 April 2026 01:11:21 +0000 (0:00:20.442) 0:04:08.333 ******** 2026-04-09 01:12:14.808335 | orchestrator | 2026-04-09 01:12:14.808339 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 01:12:14.808343 | orchestrator | Thursday 09 April 2026 01:11:21 +0000 (0:00:00.082) 0:04:08.415 ******** 2026-04-09 01:12:14.808346 | orchestrator | 2026-04-09 01:12:14.808350 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 01:12:14.808354 | orchestrator | Thursday 09 April 2026 01:11:22 +0000 (0:00:00.066) 0:04:08.481 ******** 2026-04-09 01:12:14.808358 | orchestrator | 2026-04-09 01:12:14.808362 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-09 01:12:14.808365 | orchestrator | Thursday 09 April 2026 01:11:22 +0000 (0:00:00.068) 0:04:08.550 ******** 2026-04-09 01:12:14.808372 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808376 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:12:14.808380 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:12:14.808384 | orchestrator | 2026-04-09 01:12:14.808388 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-09 01:12:14.808394 | orchestrator | Thursday 09 April 2026 01:11:31 +0000 (0:00:09.675) 0:04:18.226 ******** 2026-04-09 01:12:14.808400 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:12:14.808407 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808413 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:12:14.808418 | orchestrator | 2026-04-09 01:12:14.808425 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-09 01:12:14.808431 | orchestrator | Thursday 09 April 2026 01:11:43 +0000 (0:00:11.608) 0:04:29.834 ******** 2026-04-09 01:12:14.808442 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 01:12:14.808451 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:12:14.808457 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808463 | orchestrator | 2026-04-09 01:12:14.808468 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-09 01:12:14.808474 | orchestrator | Thursday 09 April 2026 01:11:51 +0000 (0:00:08.534) 0:04:38.369 ******** 2026-04-09 01:12:14.808479 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808485 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:12:14.808490 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:12:14.808496 | orchestrator | 2026-04-09 01:12:14.808502 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-09 01:12:14.808508 | orchestrator | Thursday 09 April 2026 01:12:02 +0000 (0:00:10.847) 0:04:49.216 ******** 2026-04-09 01:12:14.808513 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:12:14.808518 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:12:14.808524 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:12:14.808530 | orchestrator | 2026-04-09 01:12:14.808536 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:12:14.808543 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-09 01:12:14.808553 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 01:12:14.808582 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 01:12:14.808587 | orchestrator | 2026-04-09 01:12:14.808593 | orchestrator | 2026-04-09 01:12:14.808602 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:12:14.808608 | orchestrator 
| Thursday 09 April 2026 01:12:12 +0000 (0:00:10.171) 0:04:59.388 ******** 2026-04-09 01:12:14.808614 | orchestrator | =============================================================================== 2026-04-09 01:12:14.808620 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.44s 2026-04-09 01:12:14.808626 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.03s 2026-04-09 01:12:14.808632 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.75s 2026-04-09 01:12:14.808638 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.21s 2026-04-09 01:12:14.808644 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.61s 2026-04-09 01:12:14.808651 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.85s 2026-04-09 01:12:14.808658 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.17s 2026-04-09 01:12:14.808662 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.03s 2026-04-09 01:12:14.808665 | orchestrator | octavia : Restart octavia-api container --------------------------------- 9.68s 2026-04-09 01:12:14.808676 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.53s 2026-04-09 01:12:14.808679 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.11s 2026-04-09 01:12:14.808683 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 8.07s 2026-04-09 01:12:14.808687 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.98s 2026-04-09 01:12:14.808691 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 7.37s 2026-04-09 01:12:14.808694 | orchestrator | octavia : Create 
ports for Octavia health-manager nodes ----------------- 6.37s 2026-04-09 01:12:14.808698 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.98s 2026-04-09 01:12:14.808702 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.86s 2026-04-09 01:12:14.808706 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.61s 2026-04-09 01:12:14.808710 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.58s 2026-04-09 01:12:14.808714 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.38s 2026-04-09 01:12:14.808718 | orchestrator | 2026-04-09 01:12:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-04-09 01:13:15.635511 | orchestrator | 2026-04-09 01:13:15.802438 | orchestrator | 2026-04-09 01:13:15.809896 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Apr 9 01:13:15 UTC 2026 2026-04-09 01:13:15.809953 | orchestrator | 2026-04-09 01:13:16.230012 | orchestrator | ok: Runtime: 0:32:25.912593 2026-04-09 01:13:16.495895 | 2026-04-09 01:13:16.496039 | TASK [Bootstrap services] 2026-04-09 01:13:17.263019 | orchestrator | 2026-04-09 01:13:17.263156 | orchestrator | # BOOTSTRAP 2026-04-09 01:13:17.263167 | orchestrator | 2026-04-09 01:13:17.263173 | orchestrator | + set -e 2026-04-09 01:13:17.263178 | orchestrator | + echo 2026-04-09 01:13:17.263184 | orchestrator | + echo '# BOOTSTRAP' 2026-04-09 01:13:17.263192 | orchestrator | + echo 2026-04-09 01:13:17.263221 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-09 
01:13:17.273830 | orchestrator | + set -e 2026-04-09 01:13:17.273924 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-09 01:13:21.934937 | orchestrator | 2026-04-09 01:13:21 | INFO  | It takes a moment until task d9468c0b-5aeb-4614-b8cc-f6c34e2d7b8b (flavor-manager) has been started and output is visible here. 2026-04-09 01:13:31.241684 | orchestrator | 2026-04-09 01:13:26 | INFO  | Flavor SCS-1L-1 created 2026-04-09 01:13:31.241796 | orchestrator | 2026-04-09 01:13:26 | INFO  | Flavor SCS-1L-1-5 created 2026-04-09 01:13:31.241808 | orchestrator | 2026-04-09 01:13:27 | INFO  | Flavor SCS-1V-2 created 2026-04-09 01:13:31.241812 | orchestrator | 2026-04-09 01:13:27 | INFO  | Flavor SCS-1V-2-5 created 2026-04-09 01:13:31.241817 | orchestrator | 2026-04-09 01:13:27 | INFO  | Flavor SCS-1V-4 created 2026-04-09 01:13:31.241821 | orchestrator | 2026-04-09 01:13:27 | INFO  | Flavor SCS-1V-4-10 created 2026-04-09 01:13:31.241826 | orchestrator | 2026-04-09 01:13:27 | INFO  | Flavor SCS-1V-8 created 2026-04-09 01:13:31.241830 | orchestrator | 2026-04-09 01:13:27 | INFO  | Flavor SCS-1V-8-20 created 2026-04-09 01:13:31.241840 | orchestrator | 2026-04-09 01:13:27 | INFO  | Flavor SCS-2V-4 created 2026-04-09 01:13:31.241844 | orchestrator | 2026-04-09 01:13:28 | INFO  | Flavor SCS-2V-4-10 created 2026-04-09 01:13:31.241848 | orchestrator | 2026-04-09 01:13:28 | INFO  | Flavor SCS-2V-8 created 2026-04-09 01:13:31.241852 | orchestrator | 2026-04-09 01:13:28 | INFO  | Flavor SCS-2V-8-20 created 2026-04-09 01:13:31.241856 | orchestrator | 2026-04-09 01:13:28 | INFO  | Flavor SCS-2V-16 created 2026-04-09 01:13:31.241860 | orchestrator | 2026-04-09 01:13:28 | INFO  | Flavor SCS-2V-16-50 created 2026-04-09 01:13:31.241864 | orchestrator | 2026-04-09 01:13:28 | INFO  | Flavor SCS-4V-8 created 2026-04-09 01:13:31.241868 | orchestrator | 2026-04-09 01:13:28 | INFO  | Flavor SCS-4V-8-20 created 2026-04-09 01:13:31.241872 | orchestrator | 2026-04-09 
01:13:29 | INFO  | Flavor SCS-4V-16 created 2026-04-09 01:13:31.241875 | orchestrator | 2026-04-09 01:13:29 | INFO  | Flavor SCS-4V-16-50 created 2026-04-09 01:13:31.241879 | orchestrator | 2026-04-09 01:13:29 | INFO  | Flavor SCS-4V-32 created 2026-04-09 01:13:31.241883 | orchestrator | 2026-04-09 01:13:29 | INFO  | Flavor SCS-4V-32-100 created 2026-04-09 01:13:31.241894 | orchestrator | 2026-04-09 01:13:29 | INFO  | Flavor SCS-8V-16 created 2026-04-09 01:13:31.241898 | orchestrator | 2026-04-09 01:13:29 | INFO  | Flavor SCS-8V-16-50 created 2026-04-09 01:13:31.241907 | orchestrator | 2026-04-09 01:13:29 | INFO  | Flavor SCS-8V-32 created 2026-04-09 01:13:31.241911 | orchestrator | 2026-04-09 01:13:30 | INFO  | Flavor SCS-8V-32-100 created 2026-04-09 01:13:31.241915 | orchestrator | 2026-04-09 01:13:30 | INFO  | Flavor SCS-16V-32 created 2026-04-09 01:13:31.241919 | orchestrator | 2026-04-09 01:13:30 | INFO  | Flavor SCS-16V-32-100 created 2026-04-09 01:13:31.241922 | orchestrator | 2026-04-09 01:13:30 | INFO  | Flavor SCS-2V-4-20s created 2026-04-09 01:13:31.241926 | orchestrator | 2026-04-09 01:13:30 | INFO  | Flavor SCS-4V-8-50s created 2026-04-09 01:13:31.241930 | orchestrator | 2026-04-09 01:13:30 | INFO  | Flavor SCS-4V-16-100s created 2026-04-09 01:13:31.241934 | orchestrator | 2026-04-09 01:13:31 | INFO  | Flavor SCS-8V-32-100s created 2026-04-09 01:13:32.755848 | orchestrator | 2026-04-09 01:13:32 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-09 01:13:42.850408 | orchestrator | 2026-04-09 01:13:42 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-09 01:13:42.931181 | orchestrator | 2026-04-09 01:13:42 | INFO  | Task 7243d9f0-5b26-4f47-abfd-882ca3e7873b (bootstrap-basic) was prepared for execution. 2026-04-09 01:13:42.931273 | orchestrator | 2026-04-09 01:13:42 | INFO  | It takes a moment until task 7243d9f0-5b26-4f47-abfd-882ca3e7873b (bootstrap-basic) has been started and output is visible here. 
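The flavor-manager task above registers flavors following the SCS naming scheme, where a name like SCS-4V-16-50 encodes vCPU count, a CPU-class letter, RAM in GiB, and an optional root disk in GB with an optional trailing type letter (e.g. the "s" in SCS-2V-4-20s). A minimal sketch of decoding those names; the `parse_scs_name` helper is purely illustrative and not part of the testbed or flavor-manager tooling:

```python
import re
from typing import NamedTuple, Optional

class ScsFlavor(NamedTuple):
    vcpus: int
    cpu_class: str           # letter after the vCPU count, e.g. 'V' or 'L' above
    ram_gib: int
    disk_gb: Optional[int]   # None for diskless flavors such as SCS-2V-16
    disk_type: str           # optional trailing letter on the disk field, '' if absent

# Matches the flavor names that appear in the log, e.g. SCS-1L-1-5, SCS-8V-32-100s.
_NAME = re.compile(r"^SCS-(\d+)([A-Z])-(\d+)(?:-(\d+)([a-z]?))?$")

def parse_scs_name(name: str) -> ScsFlavor:
    """Hypothetical decoder for SCS-style flavor names."""
    m = _NAME.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    cpus, cls, ram, disk, dtype = m.groups()
    return ScsFlavor(int(cpus), cls, int(ram),
                     int(disk) if disk else None, dtype or "")

print(parse_scs_name("SCS-4V-16-50"))
print(parse_scs_name("SCS-2V-4-20s"))
```

This only decodes the name; the actual resource values pushed to Nova come from the flavor-manager definitions, not from parsing.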
2026-04-09 01:14:29.379306 | orchestrator | 2026-04-09 01:14:29.379383 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-09 01:14:29.379390 | orchestrator | 2026-04-09 01:14:29.379395 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 01:14:29.379399 | orchestrator | Thursday 09 April 2026 01:13:46 +0000 (0:00:00.106) 0:00:00.106 ******** 2026-04-09 01:14:29.379404 | orchestrator | ok: [localhost] 2026-04-09 01:14:29.379409 | orchestrator | 2026-04-09 01:14:29.379413 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-09 01:14:29.379418 | orchestrator | Thursday 09 April 2026 01:13:48 +0000 (0:00:01.933) 0:00:02.039 ******** 2026-04-09 01:14:29.379423 | orchestrator | ok: [localhost] 2026-04-09 01:14:29.379427 | orchestrator | 2026-04-09 01:14:29.379431 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-09 01:14:29.379435 | orchestrator | Thursday 09 April 2026 01:13:57 +0000 (0:00:09.332) 0:00:11.371 ******** 2026-04-09 01:14:29.379439 | orchestrator | changed: [localhost] 2026-04-09 01:14:29.379444 | orchestrator | 2026-04-09 01:14:29.379448 | orchestrator | TASK [Create public network] *************************************************** 2026-04-09 01:14:29.379451 | orchestrator | Thursday 09 April 2026 01:14:05 +0000 (0:00:08.137) 0:00:19.509 ******** 2026-04-09 01:14:29.379455 | orchestrator | changed: [localhost] 2026-04-09 01:14:29.379459 | orchestrator | 2026-04-09 01:14:29.379465 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-09 01:14:29.379469 | orchestrator | Thursday 09 April 2026 01:14:10 +0000 (0:00:05.423) 0:00:24.932 ******** 2026-04-09 01:14:29.379473 | orchestrator | changed: [localhost] 2026-04-09 01:14:29.379477 | orchestrator | 2026-04-09 01:14:29.379481 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-09 01:14:29.379485 | orchestrator | Thursday 09 April 2026 01:14:17 +0000 (0:00:06.299) 0:00:31.232 ******** 2026-04-09 01:14:29.379489 | orchestrator | changed: [localhost] 2026-04-09 01:14:29.379493 | orchestrator | 2026-04-09 01:14:29.379496 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-09 01:14:29.379500 | orchestrator | Thursday 09 April 2026 01:14:21 +0000 (0:00:04.489) 0:00:35.721 ******** 2026-04-09 01:14:29.379504 | orchestrator | changed: [localhost] 2026-04-09 01:14:29.379508 | orchestrator | 2026-04-09 01:14:29.379512 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-09 01:14:29.379520 | orchestrator | Thursday 09 April 2026 01:14:25 +0000 (0:00:03.774) 0:00:39.496 ******** 2026-04-09 01:14:29.379524 | orchestrator | ok: [localhost] 2026-04-09 01:14:29.379528 | orchestrator | 2026-04-09 01:14:29.379532 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:14:29.379536 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:14:29.379541 | orchestrator | 2026-04-09 01:14:29.379545 | orchestrator | 2026-04-09 01:14:29.379549 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:14:29.379552 | orchestrator | Thursday 09 April 2026 01:14:29 +0000 (0:00:03.662) 0:00:43.159 ******** 2026-04-09 01:14:29.379558 | orchestrator | =============================================================================== 2026-04-09 01:14:29.379565 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.33s 2026-04-09 01:14:29.379591 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.14s 2026-04-09 01:14:29.379597 | 
orchestrator | Set public network to default ------------------------------------------- 6.30s 2026-04-09 01:14:29.379604 | orchestrator | Create public network --------------------------------------------------- 5.42s 2026-04-09 01:14:29.379611 | orchestrator | Create public subnet ---------------------------------------------------- 4.49s 2026-04-09 01:14:29.379617 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.77s 2026-04-09 01:14:29.379624 | orchestrator | Create manager role ----------------------------------------------------- 3.66s 2026-04-09 01:14:29.379630 | orchestrator | Gathering Facts --------------------------------------------------------- 1.93s 2026-04-09 01:14:31.352807 | orchestrator | 2026-04-09 01:14:31 | INFO  | It takes a moment until task d5cd24a4-96e9-4a8c-a53d-59bd330b0cba (image-manager) has been started and output is visible here. 2026-04-09 01:15:12.872233 | orchestrator | 2026-04-09 01:14:34 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-09 01:15:12.872313 | orchestrator | 2026-04-09 01:14:34 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-09 01:15:12.872322 | orchestrator | 2026-04-09 01:14:34 | INFO  | Importing image Cirros 0.6.2 2026-04-09 01:15:12.872327 | orchestrator | 2026-04-09 01:14:34 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-09 01:15:12.872334 | orchestrator | 2026-04-09 01:14:36 | INFO  | Waiting for image to leave queued state... 2026-04-09 01:15:12.872339 | orchestrator | 2026-04-09 01:14:40 | INFO  | Waiting for import to complete... 
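The image-manager output above repeatedly logs "Waiting for import to complete..." while polling Glance until the imported image leaves the queued/importing states. A minimal sketch of such a wait loop with a stubbed status source; `get_status` and the state names are illustrative, not the actual openstack-image-manager internals:

```python
import time

def wait_for_active(get_status, timeout=600.0, interval=10.0, sleep=time.sleep):
    """Poll get_status() until it reports 'active', mirroring the
    'Waiting for import to complete...' loop in the log above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "active":
            return
        if status == "error":
            raise RuntimeError("image import failed")
        print(f"Waiting for import to complete... (status={status})")
        sleep(interval)
    raise TimeoutError("image did not become active in time")

# Stubbed status sequence standing in for a Glance API call.
states = iter(["queued", "importing", "importing", "active"])
wait_for_active(lambda: next(states), interval=0, sleep=lambda s: None)
print("import complete")
```

Injecting `sleep` keeps the loop testable without real delays; a production poller would also re-raise API errors rather than only reacting to an "error" status.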
2026-04-09 01:15:12.872343 | orchestrator | 2026-04-09 01:14:50 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-09 01:15:12.872347 | orchestrator | 2026-04-09 01:14:50 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-09 01:15:12.872352 | orchestrator | 2026-04-09 01:14:50 | INFO  | Setting internal_version = 0.6.2 2026-04-09 01:15:12.872356 | orchestrator | 2026-04-09 01:14:50 | INFO  | Setting image_original_user = cirros 2026-04-09 01:15:12.872360 | orchestrator | 2026-04-09 01:14:50 | INFO  | Adding tag os:cirros 2026-04-09 01:15:12.872364 | orchestrator | 2026-04-09 01:14:50 | INFO  | Setting property architecture: x86_64 2026-04-09 01:15:12.872368 | orchestrator | 2026-04-09 01:14:51 | INFO  | Setting property hw_disk_bus: scsi 2026-04-09 01:15:12.872372 | orchestrator | 2026-04-09 01:14:51 | INFO  | Setting property hw_rng_model: virtio 2026-04-09 01:15:12.872376 | orchestrator | 2026-04-09 01:14:51 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-09 01:15:12.872380 | orchestrator | 2026-04-09 01:14:51 | INFO  | Setting property hw_watchdog_action: reset 2026-04-09 01:15:12.872384 | orchestrator | 2026-04-09 01:14:52 | INFO  | Setting property hypervisor_type: qemu 2026-04-09 01:15:12.872393 | orchestrator | 2026-04-09 01:14:52 | INFO  | Setting property os_distro: cirros 2026-04-09 01:15:12.872397 | orchestrator | 2026-04-09 01:14:52 | INFO  | Setting property os_purpose: minimal 2026-04-09 01:15:12.872401 | orchestrator | 2026-04-09 01:14:52 | INFO  | Setting property replace_frequency: never 2026-04-09 01:15:12.872405 | orchestrator | 2026-04-09 01:14:52 | INFO  | Setting property uuid_validity: none 2026-04-09 01:15:12.872409 | orchestrator | 2026-04-09 01:14:52 | INFO  | Setting property provided_until: none 2026-04-09 01:15:12.872413 | orchestrator | 2026-04-09 01:14:53 | INFO  | Setting property image_description: Cirros 2026-04-09 01:15:12.872416 | orchestrator | 2026-04-09 01:14:53 | INFO  | 
Setting property image_name: Cirros 2026-04-09 01:15:12.872436 | orchestrator | 2026-04-09 01:14:53 | INFO  | Setting property internal_version: 0.6.2 2026-04-09 01:15:12.872440 | orchestrator | 2026-04-09 01:14:53 | INFO  | Setting property image_original_user: cirros 2026-04-09 01:15:12.872444 | orchestrator | 2026-04-09 01:14:53 | INFO  | Setting property os_version: 0.6.2 2026-04-09 01:15:12.872448 | orchestrator | 2026-04-09 01:14:54 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-09 01:15:12.872454 | orchestrator | 2026-04-09 01:14:54 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-09 01:15:12.872458 | orchestrator | 2026-04-09 01:14:54 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-09 01:15:12.872461 | orchestrator | 2026-04-09 01:14:54 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-09 01:15:12.872467 | orchestrator | 2026-04-09 01:14:54 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-09 01:15:12.872471 | orchestrator | 2026-04-09 01:14:54 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-09 01:15:12.872475 | orchestrator | 2026-04-09 01:14:54 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-09 01:15:12.872479 | orchestrator | 2026-04-09 01:14:54 | INFO  | Importing image Cirros 0.6.3 2026-04-09 01:15:12.872483 | orchestrator | 2026-04-09 01:14:54 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-09 01:15:12.872487 | orchestrator | 2026-04-09 01:14:56 | INFO  | Waiting for image to leave queued state... 2026-04-09 01:15:12.872491 | orchestrator | 2026-04-09 01:14:58 | INFO  | Waiting for import to complete... 
2026-04-09 01:15:12.872504 | orchestrator | 2026-04-09 01:15:08 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-04-09 01:15:12.872508 | orchestrator | 2026-04-09 01:15:08 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-04-09 01:15:12.872512 | orchestrator | 2026-04-09 01:15:08 | INFO  | Setting internal_version = 0.6.3 2026-04-09 01:15:12.872516 | orchestrator | 2026-04-09 01:15:08 | INFO  | Setting image_original_user = cirros 2026-04-09 01:15:12.872519 | orchestrator | 2026-04-09 01:15:08 | INFO  | Adding tag os:cirros 2026-04-09 01:15:12.872523 | orchestrator | 2026-04-09 01:15:08 | INFO  | Setting property architecture: x86_64 2026-04-09 01:15:12.872527 | orchestrator | 2026-04-09 01:15:08 | INFO  | Setting property hw_disk_bus: scsi 2026-04-09 01:15:12.872531 | orchestrator | 2026-04-09 01:15:09 | INFO  | Setting property hw_rng_model: virtio 2026-04-09 01:15:12.872535 | orchestrator | 2026-04-09 01:15:09 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-09 01:15:12.872539 | orchestrator | 2026-04-09 01:15:09 | INFO  | Setting property hw_watchdog_action: reset 2026-04-09 01:15:12.872542 | orchestrator | 2026-04-09 01:15:09 | INFO  | Setting property hypervisor_type: qemu 2026-04-09 01:15:12.872546 | orchestrator | 2026-04-09 01:15:09 | INFO  | Setting property os_distro: cirros 2026-04-09 01:15:12.872550 | orchestrator | 2026-04-09 01:15:10 | INFO  | Setting property os_purpose: minimal 2026-04-09 01:15:12.872554 | orchestrator | 2026-04-09 01:15:10 | INFO  | Setting property replace_frequency: never 2026-04-09 01:15:12.872558 | orchestrator | 2026-04-09 01:15:10 | INFO  | Setting property uuid_validity: none 2026-04-09 01:15:12.872562 | orchestrator | 2026-04-09 01:15:10 | INFO  | Setting property provided_until: none 2026-04-09 01:15:12.872566 | orchestrator | 2026-04-09 01:15:10 | INFO  | Setting property image_description: Cirros 2026-04-09 01:15:12.872574 | orchestrator | 2026-04-09 01:15:11 | INFO  | 
Setting property image_name: Cirros 2026-04-09 01:15:12.872578 | orchestrator | 2026-04-09 01:15:11 | INFO  | Setting property internal_version: 0.6.3 2026-04-09 01:15:12.872581 | orchestrator | 2026-04-09 01:15:11 | INFO  | Setting property image_original_user: cirros 2026-04-09 01:15:12.872585 | orchestrator | 2026-04-09 01:15:11 | INFO  | Setting property os_version: 0.6.3 2026-04-09 01:15:12.872589 | orchestrator | 2026-04-09 01:15:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-09 01:15:12.872593 | orchestrator | 2026-04-09 01:15:12 | INFO  | Setting property image_build_date: 2024-09-26 2026-04-09 01:15:12.872597 | orchestrator | 2026-04-09 01:15:12 | INFO  | Checking status of 'Cirros 0.6.3' 2026-04-09 01:15:12.872601 | orchestrator | 2026-04-09 01:15:12 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-04-09 01:15:12.872604 | orchestrator | 2026-04-09 01:15:12 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-04-09 01:15:13.108851 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh 2026-04-09 01:15:15.100064 | orchestrator | 2026-04-09 01:15:15 | INFO  | date: 2026-04-07 2026-04-09 01:15:15.100203 | orchestrator | 2026-04-09 01:15:15 | INFO  | image: octavia-amphora-haproxy-2025.1.20260407.qcow2 2026-04-09 01:15:15.100239 | orchestrator | 2026-04-09 01:15:15 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260407.qcow2 2026-04-09 01:15:15.100250 | orchestrator | 2026-04-09 01:15:15 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260407.qcow2.CHECKSUM 2026-04-09 01:15:15.285000 | orchestrator | 2026-04-09 01:15:15 | INFO  | checksum: 405c123a107be91a6a827c72bff835c7eb05d0295aed77b18dccaabd39d13a0c 2026-04-09 01:15:15.372621 | orchestrator | 
2026-04-09 01:15:15 | INFO  | It takes a moment until task 386ede50-f852-4d1b-98d1-0139260fc88f (image-manager) has been started and output is visible here. 2026-04-09 01:16:16.015874 | orchestrator | 2026-04-09 01:15:17 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-07' 2026-04-09 01:16:16.015960 | orchestrator | 2026-04-09 01:15:17 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260407.qcow2: 200 2026-04-09 01:16:16.015971 | orchestrator | 2026-04-09 01:15:17 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-07 2026-04-09 01:16:16.015978 | orchestrator | 2026-04-09 01:15:17 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260407.qcow2 2026-04-09 01:16:16.015986 | orchestrator | 2026-04-09 01:15:19 | INFO  | Waiting for image to leave queued state... 2026-04-09 01:16:16.015992 | orchestrator | 2026-04-09 01:15:21 | INFO  | Waiting for import to complete... 2026-04-09 01:16:16.015999 | orchestrator | 2026-04-09 01:15:31 | INFO  | Waiting for import to complete... 2026-04-09 01:16:16.016005 | orchestrator | 2026-04-09 01:15:41 | INFO  | Waiting for import to complete... 2026-04-09 01:16:16.016011 | orchestrator | 2026-04-09 01:15:51 | INFO  | Waiting for import to complete... 2026-04-09 01:16:16.016019 | orchestrator | 2026-04-09 01:16:02 | INFO  | Waiting for import to complete... 
2026-04-09 01:16:16.016025 | orchestrator | 2026-04-09 01:16:12 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-07' successfully completed, reloading images 2026-04-09 01:16:16.016052 | orchestrator | 2026-04-09 01:16:12 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-07' 2026-04-09 01:16:16.016060 | orchestrator | 2026-04-09 01:16:12 | INFO  | Setting internal_version = 2026-04-07 2026-04-09 01:16:16.016066 | orchestrator | 2026-04-09 01:16:12 | INFO  | Setting image_original_user = ubuntu 2026-04-09 01:16:16.016072 | orchestrator | 2026-04-09 01:16:12 | INFO  | Adding tag amphora 2026-04-09 01:16:16.016078 | orchestrator | 2026-04-09 01:16:12 | INFO  | Adding tag os:ubuntu 2026-04-09 01:16:16.016084 | orchestrator | 2026-04-09 01:16:12 | INFO  | Setting property architecture: x86_64 2026-04-09 01:16:16.016103 | orchestrator | 2026-04-09 01:16:13 | INFO  | Setting property hw_disk_bus: scsi 2026-04-09 01:16:16.016110 | orchestrator | 2026-04-09 01:16:13 | INFO  | Setting property hw_rng_model: virtio 2026-04-09 01:16:16.016116 | orchestrator | 2026-04-09 01:16:13 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-09 01:16:16.016122 | orchestrator | 2026-04-09 01:16:13 | INFO  | Setting property hw_watchdog_action: reset 2026-04-09 01:16:16.016128 | orchestrator | 2026-04-09 01:16:13 | INFO  | Setting property hypervisor_type: qemu 2026-04-09 01:16:16.016134 | orchestrator | 2026-04-09 01:16:13 | INFO  | Setting property os_distro: ubuntu 2026-04-09 01:16:16.016140 | orchestrator | 2026-04-09 01:16:13 | INFO  | Setting property replace_frequency: quarterly 2026-04-09 01:16:16.016146 | orchestrator | 2026-04-09 01:16:14 | INFO  | Setting property uuid_validity: last-1 2026-04-09 01:16:16.016151 | orchestrator | 2026-04-09 01:16:14 | INFO  | Setting property provided_until: none 2026-04-09 01:16:16.016157 | orchestrator | 2026-04-09 01:16:14 | INFO  | Setting property os_purpose: network 2026-04-09 01:16:16.016163 | orchestrator 
| 2026-04-09 01:16:14 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-04-09 01:16:16.016181 | orchestrator | 2026-04-09 01:16:14 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-04-09 01:16:16.016187 | orchestrator | 2026-04-09 01:16:14 | INFO  | Setting property internal_version: 2026-04-07 2026-04-09 01:16:16.016193 | orchestrator | 2026-04-09 01:16:14 | INFO  | Setting property image_original_user: ubuntu 2026-04-09 01:16:16.016199 | orchestrator | 2026-04-09 01:16:15 | INFO  | Setting property os_version: 2026-04-07 2026-04-09 01:16:16.016205 | orchestrator | 2026-04-09 01:16:15 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260407.qcow2 2026-04-09 01:16:16.016211 | orchestrator | 2026-04-09 01:16:15 | INFO  | Setting property image_build_date: 2026-04-07 2026-04-09 01:16:16.016217 | orchestrator | 2026-04-09 01:16:15 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-07' 2026-04-09 01:16:16.016223 | orchestrator | 2026-04-09 01:16:15 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-07' 2026-04-09 01:16:16.016229 | orchestrator | 2026-04-09 01:16:15 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-04-09 01:16:16.016246 | orchestrator | 2026-04-09 01:16:15 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-04-09 01:16:16.016253 | orchestrator | 2026-04-09 01:16:15 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-04-09 01:16:16.016259 | orchestrator | 2026-04-09 01:16:15 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-04-09 01:16:16.689595 | orchestrator | ok: Runtime: 0:02:59.376937 2026-04-09 01:16:16.716370 | 2026-04-09 01:16:16.716534 | TASK [Run checks] 2026-04-09 01:16:17.450365 | orchestrator | + set -e 2026-04-09 01:16:17.450538 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-04-09 01:16:17.450558 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 01:16:17.450571 | orchestrator | ++ INTERACTIVE=false 2026-04-09 01:16:17.450578 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 01:16:17.450585 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 01:16:17.450594 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 01:16:17.451605 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-09 01:16:17.459605 | orchestrator | 2026-04-09 01:16:17.459712 | orchestrator | # CHECK 2026-04-09 01:16:17.459724 | orchestrator | 2026-04-09 01:16:17.459732 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:16:17.459744 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:16:17.459752 | orchestrator | + echo 2026-04-09 01:16:17.459771 | orchestrator | + echo '# CHECK' 2026-04-09 01:16:17.459785 | orchestrator | + echo 2026-04-09 01:16:17.459799 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-09 01:16:17.461025 | orchestrator | ++ semver latest 5.0.0 2026-04-09 01:16:17.522087 | orchestrator | 2026-04-09 01:16:17.522167 | orchestrator | ## Containers @ testbed-manager 2026-04-09 01:16:17.522174 | orchestrator | 2026-04-09 01:16:17.522180 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-09 01:16:17.522184 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 01:16:17.522189 | orchestrator | + echo 2026-04-09 01:16:17.522194 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-09 01:16:17.522199 | orchestrator | + echo 2026-04-09 01:16:17.522203 | orchestrator | + osism container testbed-manager ps 2026-04-09 01:16:18.592429 | orchestrator | 2026-04-09 01:16:18 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-04-09 01:16:18.984784 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-04-09 01:16:18.984896 | orchestrator | 0d80f7e8ae7e registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter 2026-04-09 01:16:18.984919 | orchestrator | 088c65b6c690 registry.osism.tech/kolla/prometheus-alertmanager:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager 2026-04-09 01:16:18.984928 | orchestrator | ccfb39e328f4 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-09 01:16:18.984932 | orchestrator | 47a990e8336d registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2026-04-09 01:16:18.984939 | orchestrator | a5b51d5c502c registry.osism.tech/kolla/prometheus-server:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2026-04-09 01:16:18.984943 | orchestrator | 13feaedab4fb registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2026-04-09 01:16:18.984947 | orchestrator | 78ceb6a422bd registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-09 01:16:18.984953 | orchestrator | 906edf9b751a registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-09 01:16:18.984982 | orchestrator | fdc99632dfee registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-09 01:16:18.984990 | orchestrator | 4f3b07468d6c phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin 2026-04-09 01:16:18.984996 | orchestrator | f82728bb31c3 registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 30 minutes ago Up 29 minutes openstackclient 2026-04-09 01:16:18.985004 | orchestrator | fb9d775347f5 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 30 minutes ago Up 29 minutes (healthy) 8080/tcp homer 2026-04-09 01:16:18.985009 | orchestrator | 54bdf79e3bdb registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-09 01:16:18.985021 | orchestrator | c450649fe84c registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1 2026-04-09 01:16:18.985500 | orchestrator | fc0f3606dc61 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) ceph-ansible 2026-04-09 01:16:18.985559 | orchestrator | bb69d178a663 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-ansible 2026-04-09 01:16:18.985577 | orchestrator | e7bf167712a1 registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) kolla-ansible 2026-04-09 01:16:18.985600 | orchestrator | 117d5f833a8e registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-kubernetes 2026-04-09 01:16:18.985605 | orchestrator | 7e2f0f0b55e6 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1 2026-04-09 01:16:18.985609 | orchestrator | 44089aa23ced registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-09 01:16:18.985613 | orchestrator | e36d992ffaf6 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1 2026-04-09 01:16:18.985617 | orchestrator | e607f1da3bab registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes 
(healthy) manager-beat-1 2026-04-09 01:16:18.985628 | orchestrator | e9ab88be6933 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-listener-1 2026-04-09 01:16:18.986161 | orchestrator | 70d36f26ff4c registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1 2026-04-09 01:16:18.986510 | orchestrator | bb06c485655e registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 36 minutes (healthy) osismclient 2026-04-09 01:16:18.986555 | orchestrator | b97b27780ed6 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-09 01:16:18.986563 | orchestrator | ed77bcfffbdc registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-flower-1 2026-04-09 01:16:18.986570 | orchestrator | 239b4be3ec49 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-openstack-1 2026-04-09 01:16:18.986679 | orchestrator | 9a5eca125319 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-09 01:16:19.119603 | orchestrator | 2026-04-09 01:16:19.119688 | orchestrator | ## Images @ testbed-manager 2026-04-09 01:16:19.119696 | orchestrator | 2026-04-09 01:16:19.119701 | orchestrator | + echo 2026-04-09 01:16:19.119705 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-09 01:16:19.119710 | orchestrator | + echo 2026-04-09 01:16:19.119718 | orchestrator | + osism container testbed-manager images 2026-04-09 01:16:20.510779 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-09 01:16:20.510872 | orchestrator | registry.osism.tech/osism/osism-ansible latest 8a27fa143461 
About an hour ago 638MB 2026-04-09 01:16:20.510881 | orchestrator | registry.osism.tech/osism/kolla-ansible 2025.1 098491cb5195 About an hour ago 636MB 2026-04-09 01:16:20.510888 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 67004d43352e About an hour ago 1.24GB 2026-04-09 01:16:20.510894 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 5f240ab7f93d About an hour ago 585MB 2026-04-09 01:16:20.510919 | orchestrator | registry.osism.tech/osism/osism latest 2600ae4320d1 About an hour ago 407MB 2026-04-09 01:16:20.510926 | orchestrator | registry.osism.tech/osism/osism-frontend latest 27838c614aea About an hour ago 212MB 2026-04-09 01:16:20.510932 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 4a7da13cbb1b About an hour ago 357MB 2026-04-09 01:16:20.510939 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 154dc916a86b 21 hours ago 213MB 2026-04-09 01:16:20.510946 | orchestrator | registry.osism.tech/osism/cephclient reef 46995ad16e22 21 hours ago 453MB 2026-04-09 01:16:20.510951 | orchestrator | registry.osism.tech/kolla/cron 2025.1 8cc7439130f3 22 hours ago 266MB 2026-04-09 01:16:20.510958 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 d0e2620fcfb0 22 hours ago 579MB 2026-04-09 01:16:20.510964 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 82d6207cb100 22 hours ago 672MB 2026-04-09 01:16:20.510971 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2025.1 c540284ffec7 22 hours ago 404MB 2026-04-09 01:16:20.510977 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 aec713c92010 22 hours ago 357MB 2026-04-09 01:16:20.511002 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2025.1 225e9c4c0045 22 hours ago 308MB 2026-04-09 01:16:20.511008 | orchestrator | registry.osism.tech/kolla/prometheus-server 2025.1 4094a6a5dd67 22 hours ago 849MB 2026-04-09 01:16:20.511014 | orchestrator | 
registry.osism.tech/kolla/prometheus-node-exporter 2025.1 85ea4ef075ce 22 hours ago 306MB 2026-04-09 01:16:20.511021 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-09 01:16:20.511027 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-09 01:16:20.511033 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-09 01:16:20.511039 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB 2026-04-09 01:16:20.511045 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-09 01:16:20.511051 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-04-09 01:16:20.511057 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-09 01:16:20.649410 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-09 01:16:20.649929 | orchestrator | ++ semver latest 5.0.0 2026-04-09 01:16:20.695044 | orchestrator | 2026-04-09 01:16:20.695120 | orchestrator | ## Containers @ testbed-node-0 2026-04-09 01:16:20.695131 | orchestrator | 2026-04-09 01:16:20.695139 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-09 01:16:20.695145 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 01:16:20.695151 | orchestrator | + echo 2026-04-09 01:16:20.695157 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-09 01:16:20.695164 | orchestrator | + echo 2026-04-09 01:16:20.695172 | orchestrator | + osism container testbed-node-0 ps 2026-04-09 01:16:22.188152 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-09 01:16:22.189329 | orchestrator | a7e6e6741874 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-09 
01:16:22.189393 | orchestrator | 307b7be115c2 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-09 01:16:22.189403 | orchestrator | e83ecf8e3b50 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-09 01:16:22.189411 | orchestrator | 091e33c983d9 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-09 01:16:22.189418 | orchestrator | 04ccd09f4c95 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-09 01:16:22.189425 | orchestrator | 9e388a402b6d registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-04-09 01:16:22.189432 | orchestrator | 80d844694543 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-04-09 01:16:22.189461 | orchestrator | e49a619cba5a registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-09 01:16:22.189469 | orchestrator | 89cbc8401553 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-09 01:16:22.189493 | orchestrator | 5277a23d3510 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-09 01:16:22.189501 | orchestrator | d295621ec7ee registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-04-09 01:16:22.189508 | orchestrator | e2b205dbd4e0 registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-09 01:16:22.189515 | 
orchestrator | 39f77c4df0e8 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-09 01:16:22.189522 | orchestrator | c7c7ee336362 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-09 01:16:22.189529 | orchestrator | 65177323daf3 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-09 01:16:22.189538 | orchestrator | b7931fd72a90 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-09 01:16:22.189546 | orchestrator | d2b6e38014e1 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_metadata 2026-04-09 01:16:22.189553 | orchestrator | 7a9acccaac10 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-09 01:16:22.189560 | orchestrator | c20af22943e3 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-04-09 01:16:22.189567 | orchestrator | ee16bf6f649d registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-09 01:16:22.189574 | orchestrator | 360fe09de5a9 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2026-04-09 01:16:22.189624 | orchestrator | b608d77b6dbe registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-09 01:16:22.189633 | orchestrator | a1e3248aa745 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) 
barbican_keystone_listener 2026-04-09 01:16:22.189640 | orchestrator | 6c2ae9ec00d8 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-09 01:16:22.189647 | orchestrator | 445497ad23bd registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-09 01:16:22.189658 | orchestrator | d16cb5ac296a registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-04-09 01:16:22.189665 | orchestrator | 12ae636abbed registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-09 01:16:22.189672 | orchestrator | 3d1d39139d96 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-09 01:16:22.189684 | orchestrator | 5e7afb53d121 registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-09 01:16:22.189698 | orchestrator | 4f5fa31fd697 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-04-09 01:16:22.189706 | orchestrator | 5306c9915fe9 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-09 01:16:22.189713 | orchestrator | c9ff1225b62f registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2026-04-09 01:16:22.189720 | orchestrator | 078d7bf494de registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2026-04-09 01:16:22.189727 | orchestrator | cfe45be9ff4d 
registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2026-04-09 01:16:22.189735 | orchestrator | c28e1644fce0 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-09 01:16:22.189742 | orchestrator | bddfb5fd0f80 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2026-04-09 01:16:22.189750 | orchestrator | b0f348b68b81 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-09 01:16:22.189757 | orchestrator | b087c6928f19 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2026-04-09 01:16:22.189764 | orchestrator | 2291e6b01b18 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-09 01:16:22.189771 | orchestrator | fbd7576baacf registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb 2026-04-09 01:16:22.189778 | orchestrator | 9f604868a054 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-04-09 01:16:22.189785 | orchestrator | 48069c731750 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2026-04-09 01:16:22.189792 | orchestrator | d0546c041255 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-04-09 01:16:22.189799 | orchestrator | e0f1362098ce registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-09 01:16:22.189816 | orchestrator | 0b72ffb84776 registry.osism.tech/kolla/haproxy:2025.1 "dumb-init 
--single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-09 01:16:22.189824 | orchestrator | d1262313c327 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2026-04-09 01:16:22.189832 | orchestrator | 2267f0ed09e2 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-09 01:16:22.189839 | orchestrator | 12153f265154 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db_relay_1 2026-04-09 01:16:22.189853 | orchestrator | a64f89fae28d registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-04-09 01:16:22.189862 | orchestrator | 6610970837ad registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db 2026-04-09 01:16:22.189869 | orchestrator | 16794aa129c0 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 25 minutes ceph-mon-testbed-node-0 2026-04-09 01:16:22.189876 | orchestrator | 65a77c9a82c3 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-04-09 01:16:22.189883 | orchestrator | 882bdf8ab97f registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-04-09 01:16:22.189889 | orchestrator | 738011b0c8a1 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-09 01:16:22.189900 | orchestrator | 6874e0196311 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-09 01:16:22.189906 | orchestrator | 7a5748e8c638 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 
2026-04-09 01:16:22.189912 | orchestrator | b15d9809254c registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-09 01:16:22.189919 | orchestrator | edf6fc02dba7 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-09 01:16:22.189926 | orchestrator | efd1ff025616 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-09 01:16:22.189933 | orchestrator | 5bc536b6da49 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 29 minutes ago Up 28 minutes kolla_toolbox 2026-04-09 01:16:22.189940 | orchestrator | 124789e19a37 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-09 01:16:22.324743 | orchestrator | 2026-04-09 01:16:22.324848 | orchestrator | ## Images @ testbed-node-0 2026-04-09 01:16:22.324858 | orchestrator | 2026-04-09 01:16:22.324863 | orchestrator | + echo 2026-04-09 01:16:22.324867 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-09 01:16:22.324872 | orchestrator | + echo 2026-04-09 01:16:22.324876 | orchestrator | + osism container testbed-node-0 images 2026-04-09 01:16:23.805269 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-09 01:16:23.805399 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 c1990ad4dc4f 22 hours ago 1.53GB 2026-04-09 01:16:23.805408 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 7763ed4ad4b6 22 hours ago 277MB 2026-04-09 01:16:23.805413 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 b8b41f35882d 22 hours ago 339MB 2026-04-09 01:16:23.805417 | orchestrator | registry.osism.tech/kolla/cron 2025.1 8cc7439130f3 22 hours ago 266MB 2026-04-09 01:16:23.805422 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 bc6f91dda254 22 hours ago 1.03GB 2026-04-09 01:16:23.805426 | orchestrator | 
registry.osism.tech/kolla/haproxy 2025.1 a299670c4b91 22 hours ago 274MB
2026-04-09 01:16:23.805431 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 ea7627d4cd1e 22 hours ago 411MB
2026-04-09 01:16:23.805450 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 d0e2620fcfb0 22 hours ago 579MB
2026-04-09 01:16:23.805454 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 82d6207cb100 22 hours ago 672MB
2026-04-09 01:16:23.805458 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 ad840f55c52c 22 hours ago 266MB
2026-04-09 01:16:23.805462 | orchestrator | registry.osism.tech/kolla/redis 2025.1 3f75be59a6be 22 hours ago 273MB
2026-04-09 01:16:23.805466 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 14b8036ae4ac 22 hours ago 273MB
2026-04-09 01:16:23.805470 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 2b3fa85abca7 22 hours ago 1.19GB
2026-04-09 01:16:23.805487 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 d8b8e8f4b103 22 hours ago 452MB
2026-04-09 01:16:23.805491 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 27ae0b1671d3 22 hours ago 298MB
2026-04-09 01:16:23.805495 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 aec713c92010 22 hours ago 357MB
2026-04-09 01:16:23.805499 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 9638859b93dd 22 hours ago 292MB
2026-04-09 01:16:23.805503 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 891c8de7cf87 22 hours ago 301MB
2026-04-09 01:16:23.805507 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 85ea4ef075ce 22 hours ago 306MB
2026-04-09 01:16:23.805511 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 b1a77cb2526f 22 hours ago 282MB
2026-04-09 01:16:23.805515 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 4a0e07f195d0 22 hours ago 282MB
2026-04-09 01:16:23.805518 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 4f1c9c7205c7 22 hours ago 985MB
2026-04-09 01:16:23.805537 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 4d7849368a83 22 hours ago 1.42GB
2026-04-09 01:16:23.805541 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 4fbd1ca0b07b 22 hours ago 1.42GB
2026-04-09 01:16:23.805545 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 7f3588fcc9ac 22 hours ago 1.43GB
2026-04-09 01:16:23.805549 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 e8d052366361 22 hours ago 1.78GB
2026-04-09 01:16:23.805553 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 3c0c514bf4bc 22 hours ago 993MB
2026-04-09 01:16:23.805556 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 38fa20897261 22 hours ago 994MB
2026-04-09 01:16:23.805560 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 b3cb86f6c447 22 hours ago 994MB
2026-04-09 01:16:23.805564 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 63c118c02734 22 hours ago 1.23GB
2026-04-09 01:16:23.805568 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 b9452d5ec342 22 hours ago 1.04GB
2026-04-09 01:16:23.805572 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 95180877716f 22 hours ago 1.05GB
2026-04-09 01:16:23.805575 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 00e46ac40e16 22 hours ago 1.07GB
2026-04-09 01:16:23.805579 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 bb0e49a495d7 22 hours ago 1.14GB
2026-04-09 01:16:23.805583 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 5ff216e56f1f 22 hours ago 1.26GB
2026-04-09 01:16:23.805587 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2025.1 99b6a85479ee 22 hours ago 985MB
2026-04-09 01:16:23.805590 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2025.1 224267771749 22 hours ago 985MB
2026-04-09 01:16:23.805598 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 107c865682df 22 hours ago 1.04GB
2026-04-09 01:16:23.805602 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 206b69132b85 22 hours ago 1.06GB
2026-04-09 01:16:23.805606 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 918d51f00baa 22 hours ago 1.04GB
2026-04-09 01:16:23.805610 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 c01f15cfd75e 22 hours ago 1.06GB
2026-04-09 01:16:23.805613 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 1fa4863d56b4 22 hours ago 1.04GB
2026-04-09 01:16:23.805621 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 f78026e81aaa 22 hours ago 1.11GB
2026-04-09 01:16:23.805624 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 13934e15c58d 22 hours ago 998MB
2026-04-09 01:16:23.805628 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 bf1ee8547983 22 hours ago 993MB
2026-04-09 01:16:23.805632 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 cc74d24e35d5 22 hours ago 994MB
2026-04-09 01:16:23.805636 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 8f26214d031d 22 hours ago 994MB
2026-04-09 01:16:23.805639 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 fc223738e211 22 hours ago 998MB
2026-04-09 01:16:23.805643 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 3e10bcba355a 22 hours ago 994MB
2026-04-09 01:16:23.805647 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2025.1 ab1eb7b67b33 22 hours ago 1.01GB
2026-04-09 01:16:23.805651 | orchestrator | registry.osism.tech/kolla/skyline-console 2025.1 58c286ab108b 22 hours ago 1.06GB
2026-04-09 01:16:23.805654 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2025.1 9b11a6cbc65c 22 hours ago 983MB
2026-04-09 01:16:23.805973 | orchestrator | registry.osism.tech/kolla/aodh-listener 2025.1 e3df50971edc 22 hours ago 983MB
2026-04-09 01:16:23.806116 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2025.1 98b99c2a0687 22 hours ago 983MB
2026-04-09 01:16:23.806127 | orchestrator | registry.osism.tech/kolla/aodh-api 2025.1 73e7388d98b7 22 hours ago 983MB
2026-04-09 01:16:23.806132 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 7c9b9377bea4 22 hours ago 1.22GB
2026-04-09 01:16:23.806136 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 9a3e4fa9b011 22 hours ago 1.38GB
2026-04-09 01:16:23.806141 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 e4fcbcc8d910 22 hours ago 1.22GB
2026-04-09 01:16:23.806145 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 ff5b52665aff 22 hours ago 1.22GB
2026-04-09 01:16:23.806149 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 eff07914637d 22 hours ago 290MB
2026-04-09 01:16:23.806153 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 2d5f84e63291 22 hours ago 289MB
2026-04-09 01:16:23.806157 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 cb0cfbe98627 22 hours ago 289MB
2026-04-09 01:16:23.806161 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 6fb3774d0442 22 hours ago 289MB
2026-04-09 01:16:23.806164 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 b39f2e5f16dd 22 hours ago 290MB
2026-04-09 01:16:23.806169 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 07c006a220da 26 hours ago 1.56GB
2026-04-09 01:16:23.806173 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 45 hours ago 1.35GB
2026-04-09 01:16:23.952704 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 01:16:23.952829 | orchestrator | ++ semver latest 5.0.0
2026-04-09 01:16:24.001940 | orchestrator |
2026-04-09 01:16:24.002064 | orchestrator | ## Containers @ testbed-node-1
2026-04-09 01:16:24.002075 | orchestrator |
2026-04-09 01:16:24.002080 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-09 01:16:24.002085 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-09 01:16:24.002089 | orchestrator | + echo
2026-04-09 01:16:24.002094 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-09 01:16:24.002099 | orchestrator | + echo
2026-04-09 01:16:24.002103 | orchestrator | + osism container testbed-node-1 ps
2026-04-09 01:16:25.435950 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 01:16:25.436042 | orchestrator | 92a114aed3ab registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-09 01:16:25.436055 | orchestrator | 2f9fd81c145e registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-09 01:16:25.436073 | orchestrator | eb586e868af4 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-09 01:16:25.436087 | orchestrator | 40647941b087 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-09 01:16:25.436092 | orchestrator | 4c10599500ce registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-09 01:16:25.436099 | orchestrator | 913db68c57a3 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy
2026-04-09 01:16:25.436104 | orchestrator | 7e4665824a8d registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor
2026-04-09 01:16:25.436108 | orchestrator | fa68e6d6d5fb registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-09 01:16:25.436112 | orchestrator | 1f86f7ca47b2 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-09 01:16:25.436117 | orchestrator | e45f94e99cd8 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-04-09 01:16:25.436121 | orchestrator | 13d653765eae registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-09 01:16:25.436126 | orchestrator | e0380e83ed7d registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-04-09 01:16:25.436129 | orchestrator | 0b63cfd2fc50 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-09 01:16:25.436133 | orchestrator | 14e42e986c68 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-09 01:16:25.436137 | orchestrator | 7356aa83f2dd registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-09 01:16:25.436141 | orchestrator | a35986af66d2 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-09 01:16:25.436164 | orchestrator | bf10e91437c0 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-09 01:16:25.436168 | orchestrator | a42b252ca1f3 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_metadata
2026-04-09 01:16:25.436172 | orchestrator | 7cfb166a90fc registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api
2026-04-09 01:16:25.436176 | orchestrator | 042eca7e69b2 registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-09 01:16:25.436180 | orchestrator | ff1c37188fd3 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server
2026-04-09 01:16:25.436196 | orchestrator | 9284a6f1238d registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-09 01:16:25.436200 | orchestrator | a366a5c986ee registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-09 01:16:25.436204 | orchestrator | 9e7af7afbbc8 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-09 01:16:25.436211 | orchestrator | 0efabe46e955 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-04-09 01:16:25.436215 | orchestrator | d7708e71832c registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume
2026-04-09 01:16:25.436218 | orchestrator | 8034b75dad87 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-09 01:16:25.436222 | orchestrator | 2faae0b499ac registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-04-09 01:16:25.436226 | orchestrator | b1b9b8f1f5e4 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-09 01:16:25.436230 | orchestrator | 4445abb05c21 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-04-09 01:16:25.436235 | orchestrator | 2294920ba500 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2026-04-09 01:16:25.436239 | orchestrator | 16411c76e5f2 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2026-04-09 01:16:25.436242 | orchestrator | 00cee5f6416e registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2026-04-09 01:16:25.436246 | orchestrator | 9455abe2ff8e registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2026-04-09 01:16:25.436250 | orchestrator | f87b14b92cd8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2026-04-09 01:16:25.436254 | orchestrator | 38885d69f6e4 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2026-04-09 01:16:25.436262 | orchestrator | d9c5cda1ae94 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon
2026-04-09 01:16:25.436266 | orchestrator | 2271e48ed6d8 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2026-04-09 01:16:25.436270 | orchestrator | ef9d5b103250 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh
2026-04-09 01:16:25.436273 | orchestrator | 088b048e8afd registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2026-04-09 01:16:25.436277 | orchestrator | bf29636de4a8 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 20 minutes ago Up 19 minutes (healthy) mariadb
2026-04-09 01:16:25.436281 | orchestrator | 835f9e11b57b registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2026-04-09 01:16:25.436285 | orchestrator | 77dbd57e5fcd registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2026-04-09 01:16:25.436289 | orchestrator | e0d78967faee registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-04-09 01:16:25.436295 | orchestrator | 6ce43f983f7d registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-04-09 01:16:25.436381 | orchestrator | 32be42061e8e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2026-04-09 01:16:25.436391 | orchestrator | b3f31885917d registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd
2026-04-09 01:16:25.436396 | orchestrator | 5d1a0ae7bd04 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db_relay_1
2026-04-09 01:16:25.436403 | orchestrator | dd98593217cb registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 23 minutes ovn_sb_db
2026-04-09 01:16:25.436414 | orchestrator | 2869cc6a4cb0 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 25 minutes ago Up 23 minutes ovn_nb_db
2026-04-09 01:16:25.436421 | orchestrator | 06ec576a4fe9 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq
2026-04-09 01:16:25.436427 | orchestrator | 6ae79e42e51a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1
2026-04-09 01:16:25.436433 | orchestrator | 4358af525ab5 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-04-09 01:16:25.436439 | orchestrator | 87ad43592d8b registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-04-09 01:16:25.436445 | orchestrator | bc8221836739 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-04-09 01:16:25.436453 | orchestrator | 789a7c853358 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-04-09 01:16:25.436463 | orchestrator | 297a683afd49 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-04-09 01:16:25.436467 | orchestrator | b49984230887 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-04-09 01:16:25.436471 | orchestrator | eeeefa0f6b02 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-04-09 01:16:25.436474 | orchestrator | 803bd7dafd58 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-09 01:16:25.436478 | orchestrator | e1098c23be75 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-04-09 01:16:25.570969 | orchestrator |
2026-04-09 01:16:25.571052 | orchestrator | ## Images @ testbed-node-1
2026-04-09 01:16:25.571064 | orchestrator |
2026-04-09 01:16:25.571071 | orchestrator | + echo
2026-04-09 01:16:25.571078 | orchestrator | + echo '## Images @ testbed-node-1'
2026-04-09 01:16:25.571086 | orchestrator | + echo
2026-04-09 01:16:25.571093 | orchestrator | + osism container testbed-node-1 images
2026-04-09 01:16:26.996778 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 01:16:26.996868 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 c1990ad4dc4f 22 hours ago 1.53GB
2026-04-09 01:16:26.996878 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 7763ed4ad4b6 22 hours ago 277MB
2026-04-09 01:16:26.996884 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 b8b41f35882d 22 hours ago 339MB
2026-04-09 01:16:26.996891 | orchestrator | registry.osism.tech/kolla/cron 2025.1 8cc7439130f3 22 hours ago 266MB
2026-04-09 01:16:26.996898 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 bc6f91dda254 22 hours ago 1.03GB
2026-04-09 01:16:26.996905 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 a299670c4b91 22 hours ago 274MB
2026-04-09 01:16:26.996911 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 ea7627d4cd1e 22 hours ago 411MB
2026-04-09 01:16:26.996918 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 d0e2620fcfb0 22 hours ago 579MB
2026-04-09 01:16:26.996924 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 82d6207cb100 22 hours ago 672MB
2026-04-09 01:16:26.996931 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 ad840f55c52c 22 hours ago 266MB
2026-04-09 01:16:26.996937 | orchestrator | registry.osism.tech/kolla/redis 2025.1 3f75be59a6be 22 hours ago 273MB
2026-04-09 01:16:26.996944 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 14b8036ae4ac 22 hours ago 273MB
2026-04-09 01:16:26.996951 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 2b3fa85abca7 22 hours ago 1.19GB
2026-04-09 01:16:26.996958 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 d8b8e8f4b103 22 hours ago 452MB
2026-04-09 01:16:26.996964 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 27ae0b1671d3 22 hours ago 298MB
2026-04-09 01:16:26.996971 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 aec713c92010 22 hours ago 357MB
2026-04-09 01:16:26.996978 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 9638859b93dd 22 hours ago 292MB
2026-04-09 01:16:26.996984 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 891c8de7cf87 22 hours ago 301MB
2026-04-09 01:16:26.997013 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 85ea4ef075ce 22 hours ago 306MB
2026-04-09 01:16:26.997021 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 b1a77cb2526f 22 hours ago 282MB
2026-04-09 01:16:26.997027 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 4f1c9c7205c7 22 hours ago 985MB
2026-04-09 01:16:26.997033 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 4a0e07f195d0 22 hours ago 282MB
2026-04-09 01:16:26.997039 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 4d7849368a83 22 hours ago 1.42GB
2026-04-09 01:16:26.997045 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 4fbd1ca0b07b 22 hours ago 1.42GB
2026-04-09 01:16:26.997051 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 7f3588fcc9ac 22 hours ago 1.43GB
2026-04-09 01:16:26.997074 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 e8d052366361 22 hours ago 1.78GB
2026-04-09 01:16:26.997080 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 3c0c514bf4bc 22 hours ago 993MB
2026-04-09 01:16:26.997087 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 38fa20897261 22 hours ago 994MB
2026-04-09 01:16:26.997092 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 b3cb86f6c447 22 hours ago 994MB
2026-04-09 01:16:26.997098 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 63c118c02734 22 hours ago 1.23GB
2026-04-09 01:16:26.997104 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 b9452d5ec342 22 hours ago 1.04GB
2026-04-09 01:16:26.997110 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 95180877716f 22 hours ago 1.05GB
2026-04-09 01:16:26.997116 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 00e46ac40e16 22 hours ago 1.07GB
2026-04-09 01:16:26.997122 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 bb0e49a495d7 22 hours ago 1.14GB
2026-04-09 01:16:26.997128 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 5ff216e56f1f 22 hours ago 1.26GB
2026-04-09 01:16:26.997134 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 107c865682df 22 hours ago 1.04GB
2026-04-09 01:16:26.997157 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 206b69132b85 22 hours ago 1.06GB
2026-04-09 01:16:26.997163 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 918d51f00baa 22 hours ago 1.04GB
2026-04-09 01:16:26.997169 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 c01f15cfd75e 22 hours ago 1.06GB
2026-04-09 01:16:26.997175 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 1fa4863d56b4 22 hours ago 1.04GB
2026-04-09 01:16:26.997181 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 f78026e81aaa 22 hours ago 1.11GB
2026-04-09 01:16:26.997187 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 13934e15c58d 22 hours ago 998MB
2026-04-09 01:16:26.997193 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 bf1ee8547983 22 hours ago 993MB
2026-04-09 01:16:26.997199 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 cc74d24e35d5 22 hours ago 994MB
2026-04-09 01:16:26.997204 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 8f26214d031d 22 hours ago 994MB
2026-04-09 01:16:26.997210 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 fc223738e211 22 hours ago 998MB
2026-04-09 01:16:26.997216 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 3e10bcba355a 22 hours ago 994MB
2026-04-09 01:16:26.997222 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 7c9b9377bea4 22 hours ago 1.22GB
2026-04-09 01:16:26.997236 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 9a3e4fa9b011 22 hours ago 1.38GB
2026-04-09 01:16:26.997242 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 e4fcbcc8d910 22 hours ago 1.22GB
2026-04-09 01:16:26.997248 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 ff5b52665aff 22 hours ago 1.22GB
2026-04-09 01:16:26.997254 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 eff07914637d 22 hours ago 290MB
2026-04-09 01:16:26.997260 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 2d5f84e63291 22 hours ago 289MB
2026-04-09 01:16:26.997266 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 cb0cfbe98627 22 hours ago 289MB
2026-04-09 01:16:26.997272 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 6fb3774d0442 22 hours ago 289MB
2026-04-09 01:16:26.997278 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 b39f2e5f16dd 22 hours ago 290MB
2026-04-09 01:16:26.997284 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 07c006a220da 26 hours ago 1.56GB
2026-04-09 01:16:26.997375 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 45 hours ago 1.35GB
2026-04-09 01:16:27.129421 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 01:16:27.129765 | orchestrator | ++ semver latest 5.0.0
2026-04-09 01:16:27.178962 | orchestrator |
2026-04-09 01:16:27.179047 | orchestrator | ## Containers @ testbed-node-2
2026-04-09 01:16:27.179057 | orchestrator |
2026-04-09 01:16:27.179063 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-09 01:16:27.179069 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-09 01:16:27.179075 | orchestrator | + echo
2026-04-09 01:16:27.179082 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-09 01:16:27.179088 | orchestrator | + echo
2026-04-09 01:16:27.179094 | orchestrator | + osism container testbed-node-2 ps
2026-04-09 01:16:28.559585 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 01:16:28.559637 | orchestrator | 2f3e3f56b7e5 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-09 01:16:28.559643 | orchestrator | 57f6c75115a1 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-09 01:16:28.559647 | orchestrator | 077b1bfe8f63 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-09 01:16:28.559651 | orchestrator | 0022cb90422c registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-09 01:16:28.559655 | orchestrator | 01fe0b914ad8 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-09 01:16:28.559659 | orchestrator | ded053d78bce registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy
2026-04-09 01:16:28.559663 | orchestrator | fa8ee178ddfb registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor
2026-04-09 01:16:28.559667 | orchestrator | 715f1e05faca registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-09 01:16:28.559671 | orchestrator | cfdfe2d5ac3a registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-09 01:16:28.559693 | orchestrator | 24d2a9c2e374 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 9 minutes ago Up 8 minutes grafana
2026-04-09 01:16:28.559699 | orchestrator | 18cf5ee34f2a registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-09 01:16:28.559703 | orchestrator | 02762bc63691 registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-04-09 01:16:28.559707 | orchestrator | 41f895e6b736 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-09 01:16:28.559711 | orchestrator | 87cb917cfb6d registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-09 01:16:28.559715 | orchestrator | c60c1a422d22 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-09 01:16:28.559718 | orchestrator | b0ff17459b0e registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-09 01:16:28.559722 | orchestrator | a0c1579a0589 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-09 01:16:28.559726 | orchestrator | dbf6a3d001b0 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_metadata
2026-04-09 01:16:28.559730 | orchestrator | 7d717c146247 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-09 01:16:28.559733 | orchestrator | 2735f5e2833c registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-09 01:16:28.559737 | orchestrator | 0607c320c8bf registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server
2026-04-09 01:16:28.559748 | orchestrator | 39b1e052a9b0 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-09 01:16:28.559752 | orchestrator | a2f7c52f775d registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-09 01:16:28.559756 | orchestrator | a701f4c8a460 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-09 01:16:28.559760 | orchestrator | 1887acf28002 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-04-09 01:16:28.559764 | orchestrator | 5dc90799ad3f registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 13 minutes ago Up 12 minutes (healthy) cinder_volume
2026-04-09 01:16:28.559769 | orchestrator | 4f92d73f1184 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-09 01:16:28.559776 | orchestrator | 220ce0262178 registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-04-09 01:16:28.559791 | orchestrator | 5f6043133679 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-09 01:16:28.559802 | orchestrator | 9a126099b369 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-04-09 01:16:28.559808 | orchestrator | 66030154ebec registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2026-04-09 01:16:28.559815 | orchestrator | ff928df342bb registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2026-04-09 01:16:28.559821 | orchestrator | a7212e017c23 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2026-04-09 01:16:28.559828 | orchestrator | 4bfd9ff9ec12 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2026-04-09 01:16:28.559834 | orchestrator | b2106e5226ba registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2
2026-04-09 01:16:28.559840 | orchestrator | 5e4ebd6140a6 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2026-04-09 01:16:28.559846 | orchestrator | f4d44fdb8e31 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon
2026-04-09 01:16:28.559853 | orchestrator | eee0329af309 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2026-04-09 01:16:28.559860 | orchestrator | ee527d41240b registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh
2026-04-09 01:16:28.559867 | orchestrator | 3201884493df registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2026-04-09 01:16:28.559871 | orchestrator | abfdf8aacac0 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-04-09 01:16:28.559875 | orchestrator | 0a4f250f59e2 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2026-04-09 01:16:28.559879 | orchestrator | dc1cb06c247f registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2026-04-09 01:16:28.559883 | orchestrator | eb267790fea6 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-04-09 01:16:28.559893 | orchestrator | 0acc518eb57b registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-04-09 01:16:28.559897 | orchestrator | 5388e5971f34 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2
2026-04-09 01:16:28.559900 | orchestrator | e28067b65e9f registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd
2026-04-09 01:16:28.559904 | orchestrator | a3140fba93a0 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db_relay_1
2026-04-09 01:16:28.559908 | orchestrator | f90cdafb3074 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 24 minutes ago Up 23 minutes ovn_sb_db
2026-04-09 01:16:28.559915 | orchestrator | bfd7b35e2638 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) rabbitmq
2026-04-09 01:16:28.559919 | orchestrator | b5822517d947 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 25 minutes ago Up 23 minutes ovn_nb_db
2026-04-09 01:16:28.559922 | orchestrator | dad3a035a931 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2
2026-04-09 01:16:28.559926 | orchestrator | 32fa5b33f440 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-04-09 01:16:28.559930 | orchestrator | 56e4d7f34bf6 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-04-09 01:16:28.559934 | orchestrator | c6442ff398c3 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-04-09 01:16:28.559937 | orchestrator | f4759679b717 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-04-09 01:16:28.559943 | orchestrator | 614e4d58ae82 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-04-09 01:16:28.559947 | orchestrator | 2202ac30a8ce registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-04-09 01:16:28.559950 | orchestrator | b9b0580fd237 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-04-09 01:16:28.559954 | orchestrator | 5bb25cb0b447 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-09 01:16:28.559958 | orchestrator | e2954f23c297 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-04-09 01:16:28.691747 | orchestrator |
2026-04-09 01:16:28.691809 | orchestrator | ## Images @ testbed-node-2
2026-04-09 01:16:28.691817 | orchestrator |
2026-04-09 01:16:28.691823 | orchestrator | + echo
2026-04-09 01:16:28.691828 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-09 01:16:28.691834 | orchestrator | + echo
2026-04-09 01:16:28.691841 | orchestrator | + osism container testbed-node-2 images
2026-04-09 01:16:30.078957 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 01:16:30.079008 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 c1990ad4dc4f 22 hours ago 1.53GB
2026-04-09 01:16:30.079014 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 7763ed4ad4b6 22 hours ago 277MB
2026-04-09 01:16:30.079018 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 b8b41f35882d 22 hours ago
339MB 2026-04-09 01:16:30.079022 | orchestrator | registry.osism.tech/kolla/cron 2025.1 8cc7439130f3 22 hours ago 266MB 2026-04-09 01:16:30.079029 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 bc6f91dda254 22 hours ago 1.03GB 2026-04-09 01:16:30.079035 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 a299670c4b91 22 hours ago 274MB 2026-04-09 01:16:30.079042 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 ea7627d4cd1e 22 hours ago 411MB 2026-04-09 01:16:30.079047 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 d0e2620fcfb0 22 hours ago 579MB 2026-04-09 01:16:30.079069 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 82d6207cb100 22 hours ago 672MB 2026-04-09 01:16:30.079076 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 ad840f55c52c 22 hours ago 266MB 2026-04-09 01:16:30.079082 | orchestrator | registry.osism.tech/kolla/redis 2025.1 3f75be59a6be 22 hours ago 273MB 2026-04-09 01:16:30.079089 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 14b8036ae4ac 22 hours ago 273MB 2026-04-09 01:16:30.079093 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 2b3fa85abca7 22 hours ago 1.19GB 2026-04-09 01:16:30.079097 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 d8b8e8f4b103 22 hours ago 452MB 2026-04-09 01:16:30.079101 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 27ae0b1671d3 22 hours ago 298MB 2026-04-09 01:16:30.079104 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 aec713c92010 22 hours ago 357MB 2026-04-09 01:16:30.079108 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 9638859b93dd 22 hours ago 292MB 2026-04-09 01:16:30.079112 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 891c8de7cf87 22 hours ago 301MB 2026-04-09 01:16:30.079116 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 85ea4ef075ce 22 hours ago 306MB 
2026-04-09 01:16:30.079119 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 b1a77cb2526f 22 hours ago 282MB 2026-04-09 01:16:30.079123 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 4f1c9c7205c7 22 hours ago 985MB 2026-04-09 01:16:30.079127 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 4a0e07f195d0 22 hours ago 282MB 2026-04-09 01:16:30.079132 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 4d7849368a83 22 hours ago 1.42GB 2026-04-09 01:16:30.079138 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 4fbd1ca0b07b 22 hours ago 1.42GB 2026-04-09 01:16:30.079145 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 7f3588fcc9ac 22 hours ago 1.43GB 2026-04-09 01:16:30.079151 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 e8d052366361 22 hours ago 1.78GB 2026-04-09 01:16:30.079158 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 3c0c514bf4bc 22 hours ago 993MB 2026-04-09 01:16:30.079164 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 38fa20897261 22 hours ago 994MB 2026-04-09 01:16:30.079178 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 b3cb86f6c447 22 hours ago 994MB 2026-04-09 01:16:30.079189 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 63c118c02734 22 hours ago 1.23GB 2026-04-09 01:16:30.079196 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 b9452d5ec342 22 hours ago 1.04GB 2026-04-09 01:16:30.079203 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 95180877716f 22 hours ago 1.05GB 2026-04-09 01:16:30.079209 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 00e46ac40e16 22 hours ago 1.07GB 2026-04-09 01:16:30.079215 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 bb0e49a495d7 22 hours ago 1.14GB 2026-04-09 01:16:30.079222 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 5ff216e56f1f 22 hours ago 
1.26GB 2026-04-09 01:16:30.079231 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 107c865682df 22 hours ago 1.04GB 2026-04-09 01:16:30.079260 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 206b69132b85 22 hours ago 1.06GB 2026-04-09 01:16:30.079269 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 918d51f00baa 22 hours ago 1.04GB 2026-04-09 01:16:30.079280 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 c01f15cfd75e 22 hours ago 1.06GB 2026-04-09 01:16:30.079287 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 1fa4863d56b4 22 hours ago 1.04GB 2026-04-09 01:16:30.079292 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 f78026e81aaa 22 hours ago 1.11GB 2026-04-09 01:16:30.079296 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 13934e15c58d 22 hours ago 998MB 2026-04-09 01:16:30.079300 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 bf1ee8547983 22 hours ago 993MB 2026-04-09 01:16:30.079304 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 cc74d24e35d5 22 hours ago 994MB 2026-04-09 01:16:30.079307 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 8f26214d031d 22 hours ago 994MB 2026-04-09 01:16:30.079311 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 fc223738e211 22 hours ago 998MB 2026-04-09 01:16:30.079341 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 3e10bcba355a 22 hours ago 994MB 2026-04-09 01:16:30.079345 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 7c9b9377bea4 22 hours ago 1.22GB 2026-04-09 01:16:30.079349 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 9a3e4fa9b011 22 hours ago 1.38GB 2026-04-09 01:16:30.079352 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 e4fcbcc8d910 22 hours ago 1.22GB 2026-04-09 01:16:30.079356 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 
ff5b52665aff 22 hours ago 1.22GB 2026-04-09 01:16:30.079362 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 eff07914637d 22 hours ago 290MB 2026-04-09 01:16:30.079371 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 2d5f84e63291 22 hours ago 289MB 2026-04-09 01:16:30.079377 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 cb0cfbe98627 22 hours ago 289MB 2026-04-09 01:16:30.079384 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 6fb3774d0442 22 hours ago 289MB 2026-04-09 01:16:30.079390 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 b39f2e5f16dd 22 hours ago 290MB 2026-04-09 01:16:30.079396 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 07c006a220da 26 hours ago 1.56GB 2026-04-09 01:16:30.079403 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 45 hours ago 1.35GB 2026-04-09 01:16:30.212150 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-09 01:16:30.222875 | orchestrator | + set -e 2026-04-09 01:16:30.222927 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 01:16:30.225674 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 01:16:30.225721 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 01:16:30.225729 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 01:16:30.225736 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 01:16:30.225742 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 01:16:30.225750 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 01:16:30.225756 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:16:30.225763 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:16:30.225770 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-09 01:16:30.225777 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-09 01:16:30.225783 | orchestrator | ++ export ARA=false 2026-04-09 01:16:30.225790 | orchestrator | ++ ARA=false 2026-04-09 01:16:30.225797 | 
orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 01:16:30.225804 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 01:16:30.225810 | orchestrator | ++ export TEMPEST=true 2026-04-09 01:16:30.225817 | orchestrator | ++ TEMPEST=true 2026-04-09 01:16:30.225824 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 01:16:30.225831 | orchestrator | ++ IS_ZUUL=true 2026-04-09 01:16:30.225838 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 01:16:30.225861 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 01:16:30.225868 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 01:16:30.225875 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 01:16:30.225881 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 01:16:30.225888 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 01:16:30.225895 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 01:16:30.225902 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 01:16:30.225909 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 01:16:30.225915 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 01:16:30.225929 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 01:16:30.225937 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-09 01:16:30.232426 | orchestrator | + set -e 2026-04-09 01:16:30.232475 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 01:16:30.232481 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 01:16:30.232486 | orchestrator | ++ INTERACTIVE=false 2026-04-09 01:16:30.232490 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 01:16:30.232494 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 01:16:30.232499 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 01:16:30.232920 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 
2026-04-09 01:16:30.236046 | orchestrator | 2026-04-09 01:16:30.236093 | orchestrator | # Ceph status 2026-04-09 01:16:30.236102 | orchestrator | 2026-04-09 01:16:30.236109 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:16:30.236117 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:16:30.236124 | orchestrator | + echo 2026-04-09 01:16:30.236130 | orchestrator | + echo '# Ceph status' 2026-04-09 01:16:30.236137 | orchestrator | + echo 2026-04-09 01:16:30.236144 | orchestrator | + ceph -s 2026-04-09 01:16:30.779097 | orchestrator | cluster: 2026-04-09 01:16:30.779152 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-09 01:16:30.779159 | orchestrator | health: HEALTH_OK 2026-04-09 01:16:30.779165 | orchestrator | 2026-04-09 01:16:30.779170 | orchestrator | services: 2026-04-09 01:16:30.779175 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2026-04-09 01:16:30.779180 | orchestrator | mgr: testbed-node-0(active, since 16m), standbys: testbed-node-1, testbed-node-2 2026-04-09 01:16:30.779185 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-09 01:16:30.779190 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 23m) 2026-04-09 01:16:30.779195 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-09 01:16:30.779200 | orchestrator | 2026-04-09 01:16:30.779205 | orchestrator | data: 2026-04-09 01:16:30.779209 | orchestrator | volumes: 1/1 healthy 2026-04-09 01:16:30.779215 | orchestrator | pools: 14 pools, 401 pgs 2026-04-09 01:16:30.779223 | orchestrator | objects: 556 objects, 2.2 GiB 2026-04-09 01:16:30.779233 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-09 01:16:30.779246 | orchestrator | pgs: 401 active+clean 2026-04-09 01:16:30.779253 | orchestrator | 2026-04-09 01:16:30.822307 | orchestrator | 2026-04-09 01:16:30.822393 | orchestrator | # Ceph versions 2026-04-09 01:16:30.822401 | orchestrator | 2026-04-09 01:16:30.822405 | 
orchestrator | + echo 2026-04-09 01:16:30.822409 | orchestrator | + echo '# Ceph versions' 2026-04-09 01:16:30.822414 | orchestrator | + echo 2026-04-09 01:16:30.822418 | orchestrator | + ceph versions 2026-04-09 01:16:31.425486 | orchestrator | { 2026-04-09 01:16:31.425543 | orchestrator | "mon": { 2026-04-09 01:16:31.425552 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-09 01:16:31.425559 | orchestrator | }, 2026-04-09 01:16:31.425565 | orchestrator | "mgr": { 2026-04-09 01:16:31.425571 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-09 01:16:31.425577 | orchestrator | }, 2026-04-09 01:16:31.425583 | orchestrator | "osd": { 2026-04-09 01:16:31.425589 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-09 01:16:31.425595 | orchestrator | }, 2026-04-09 01:16:31.425601 | orchestrator | "mds": { 2026-04-09 01:16:31.425607 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-09 01:16:31.425613 | orchestrator | }, 2026-04-09 01:16:31.425618 | orchestrator | "rgw": { 2026-04-09 01:16:31.425624 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-09 01:16:31.425630 | orchestrator | }, 2026-04-09 01:16:31.425637 | orchestrator | "overall": { 2026-04-09 01:16:31.425643 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-04-09 01:16:31.425665 | orchestrator | } 2026-04-09 01:16:31.425671 | orchestrator | } 2026-04-09 01:16:31.469855 | orchestrator | 2026-04-09 01:16:31.469902 | orchestrator | # Ceph OSD tree 2026-04-09 01:16:31.469908 | orchestrator | 2026-04-09 01:16:31.469912 | orchestrator | + echo 2026-04-09 01:16:31.469916 | orchestrator | + echo '# Ceph OSD tree' 2026-04-09 01:16:31.469922 | orchestrator | + echo 2026-04-09 01:16:31.469928 
| orchestrator | + ceph osd df tree 2026-04-09 01:16:31.963050 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-09 01:16:31.963154 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-04-09 01:16:31.963164 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2026-04-09 01:16:31.963170 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.04 1.19 201 up osd.0 2026-04-09 01:16:31.963187 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 980 MiB 907 MiB 1 KiB 74 MiB 19 GiB 4.79 0.81 189 up osd.5 2026-04-09 01:16:31.963194 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-04-09 01:16:31.963200 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.55 1.11 192 up osd.1 2026-04-09 01:16:31.963207 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 70 MiB 19 GiB 5.28 0.89 196 up osd.4 2026-04-09 01:16:31.963214 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-04-09 01:16:31.963220 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.61 0.95 192 up osd.2 2026-04-09 01:16:31.963224 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.22 1.05 200 up osd.3 2026-04-09 01:16:31.963228 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-04-09 01:16:31.963232 | orchestrator | MIN/MAX VAR: 0.81/1.19 STDDEV: 0.77 2026-04-09 01:16:32.008117 | orchestrator | 2026-04-09 01:16:32.008206 | orchestrator | # Ceph monitor status 2026-04-09 01:16:32.008215 | orchestrator | 2026-04-09 01:16:32.008223 | orchestrator | + echo 2026-04-09 01:16:32.008230 | orchestrator | + echo '# Ceph monitor status' 2026-04-09 01:16:32.008238 | orchestrator | + 
echo 2026-04-09 01:16:32.008245 | orchestrator | + ceph mon stat 2026-04-09 01:16:32.583943 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-09 01:16:32.626633 | orchestrator | 2026-04-09 01:16:32.626702 | orchestrator | # Ceph quorum status 2026-04-09 01:16:32.626708 | orchestrator | 2026-04-09 01:16:32.626713 | orchestrator | + echo 2026-04-09 01:16:32.626718 | orchestrator | + echo '# Ceph quorum status' 2026-04-09 01:16:32.626730 | orchestrator | + echo 2026-04-09 01:16:32.627725 | orchestrator | + ceph quorum_status 2026-04-09 01:16:32.627783 | orchestrator | + jq 2026-04-09 01:16:33.283867 | orchestrator | { 2026-04-09 01:16:33.283964 | orchestrator | "election_epoch": 4, 2026-04-09 01:16:33.283977 | orchestrator | "quorum": [ 2026-04-09 01:16:33.283984 | orchestrator | 0, 2026-04-09 01:16:33.283992 | orchestrator | 1, 2026-04-09 01:16:33.284047 | orchestrator | 2 2026-04-09 01:16:33.284055 | orchestrator | ], 2026-04-09 01:16:33.284062 | orchestrator | "quorum_names": [ 2026-04-09 01:16:33.284093 | orchestrator | "testbed-node-0", 2026-04-09 01:16:33.284102 | orchestrator | "testbed-node-1", 2026-04-09 01:16:33.284109 | orchestrator | "testbed-node-2" 2026-04-09 01:16:33.284116 | orchestrator | ], 2026-04-09 01:16:33.284123 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-09 01:16:33.284132 | orchestrator | "quorum_age": 1569, 2026-04-09 01:16:33.284139 | orchestrator | "features": { 2026-04-09 01:16:33.284170 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-09 01:16:33.284179 | orchestrator | "quorum_mon": [ 2026-04-09 01:16:33.284186 | orchestrator | "kraken", 2026-04-09 01:16:33.284193 | orchestrator | 
"luminous", 2026-04-09 01:16:33.284200 | orchestrator | "mimic", 2026-04-09 01:16:33.284208 | orchestrator | "osdmap-prune", 2026-04-09 01:16:33.284214 | orchestrator | "nautilus", 2026-04-09 01:16:33.284221 | orchestrator | "octopus", 2026-04-09 01:16:33.284228 | orchestrator | "pacific", 2026-04-09 01:16:33.284235 | orchestrator | "elector-pinging", 2026-04-09 01:16:33.284241 | orchestrator | "quincy", 2026-04-09 01:16:33.284248 | orchestrator | "reef" 2026-04-09 01:16:33.284254 | orchestrator | ] 2026-04-09 01:16:33.284261 | orchestrator | }, 2026-04-09 01:16:33.284268 | orchestrator | "monmap": { 2026-04-09 01:16:33.284274 | orchestrator | "epoch": 1, 2026-04-09 01:16:33.284281 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-09 01:16:33.284289 | orchestrator | "modified": "2026-04-09T00:50:11.054400Z", 2026-04-09 01:16:33.284297 | orchestrator | "created": "2026-04-09T00:50:11.054400Z", 2026-04-09 01:16:33.284318 | orchestrator | "min_mon_release": 18, 2026-04-09 01:16:33.284371 | orchestrator | "min_mon_release_name": "reef", 2026-04-09 01:16:33.284379 | orchestrator | "election_strategy": 1, 2026-04-09 01:16:33.284385 | orchestrator | "disallowed_leaders": "", 2026-04-09 01:16:33.284392 | orchestrator | "stretch_mode": false, 2026-04-09 01:16:33.284398 | orchestrator | "tiebreaker_mon": "", 2026-04-09 01:16:33.284405 | orchestrator | "removed_ranks": "", 2026-04-09 01:16:33.284413 | orchestrator | "features": { 2026-04-09 01:16:33.284420 | orchestrator | "persistent": [ 2026-04-09 01:16:33.284427 | orchestrator | "kraken", 2026-04-09 01:16:33.284434 | orchestrator | "luminous", 2026-04-09 01:16:33.284441 | orchestrator | "mimic", 2026-04-09 01:16:33.284447 | orchestrator | "osdmap-prune", 2026-04-09 01:16:33.284454 | orchestrator | "nautilus", 2026-04-09 01:16:33.284461 | orchestrator | "octopus", 2026-04-09 01:16:33.284469 | orchestrator | "pacific", 2026-04-09 01:16:33.284477 | orchestrator | "elector-pinging", 2026-04-09 
01:16:33.284484 | orchestrator | "quincy", 2026-04-09 01:16:33.284491 | orchestrator | "reef" 2026-04-09 01:16:33.284498 | orchestrator | ], 2026-04-09 01:16:33.284506 | orchestrator | "optional": [] 2026-04-09 01:16:33.284513 | orchestrator | }, 2026-04-09 01:16:33.284520 | orchestrator | "mons": [ 2026-04-09 01:16:33.284527 | orchestrator | { 2026-04-09 01:16:33.284534 | orchestrator | "rank": 0, 2026-04-09 01:16:33.284541 | orchestrator | "name": "testbed-node-0", 2026-04-09 01:16:33.284549 | orchestrator | "public_addrs": { 2026-04-09 01:16:33.284555 | orchestrator | "addrvec": [ 2026-04-09 01:16:33.284562 | orchestrator | { 2026-04-09 01:16:33.284569 | orchestrator | "type": "v2", 2026-04-09 01:16:33.284577 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-09 01:16:33.284584 | orchestrator | "nonce": 0 2026-04-09 01:16:33.284591 | orchestrator | }, 2026-04-09 01:16:33.284598 | orchestrator | { 2026-04-09 01:16:33.284605 | orchestrator | "type": "v1", 2026-04-09 01:16:33.284613 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-09 01:16:33.284620 | orchestrator | "nonce": 0 2026-04-09 01:16:33.284627 | orchestrator | } 2026-04-09 01:16:33.284634 | orchestrator | ] 2026-04-09 01:16:33.284640 | orchestrator | }, 2026-04-09 01:16:33.284646 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-09 01:16:33.284653 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-09 01:16:33.284659 | orchestrator | "priority": 0, 2026-04-09 01:16:33.284666 | orchestrator | "weight": 0, 2026-04-09 01:16:33.284672 | orchestrator | "crush_location": "{}" 2026-04-09 01:16:33.284679 | orchestrator | }, 2026-04-09 01:16:33.284685 | orchestrator | { 2026-04-09 01:16:33.284692 | orchestrator | "rank": 1, 2026-04-09 01:16:33.284698 | orchestrator | "name": "testbed-node-1", 2026-04-09 01:16:33.284705 | orchestrator | "public_addrs": { 2026-04-09 01:16:33.284711 | orchestrator | "addrvec": [ 2026-04-09 01:16:33.284717 | orchestrator | { 2026-04-09 01:16:33.284724 | 
orchestrator | "type": "v2", 2026-04-09 01:16:33.284730 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-09 01:16:33.284737 | orchestrator | "nonce": 0 2026-04-09 01:16:33.284743 | orchestrator | }, 2026-04-09 01:16:33.284749 | orchestrator | { 2026-04-09 01:16:33.284755 | orchestrator | "type": "v1", 2026-04-09 01:16:33.284762 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-09 01:16:33.284768 | orchestrator | "nonce": 0 2026-04-09 01:16:33.284785 | orchestrator | } 2026-04-09 01:16:33.284790 | orchestrator | ] 2026-04-09 01:16:33.284797 | orchestrator | }, 2026-04-09 01:16:33.284803 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-09 01:16:33.284810 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-09 01:16:33.284816 | orchestrator | "priority": 0, 2026-04-09 01:16:33.284822 | orchestrator | "weight": 0, 2026-04-09 01:16:33.284828 | orchestrator | "crush_location": "{}" 2026-04-09 01:16:33.284834 | orchestrator | }, 2026-04-09 01:16:33.284840 | orchestrator | { 2026-04-09 01:16:33.284846 | orchestrator | "rank": 2, 2026-04-09 01:16:33.284852 | orchestrator | "name": "testbed-node-2", 2026-04-09 01:16:33.284858 | orchestrator | "public_addrs": { 2026-04-09 01:16:33.284864 | orchestrator | "addrvec": [ 2026-04-09 01:16:33.284870 | orchestrator | { 2026-04-09 01:16:33.284876 | orchestrator | "type": "v2", 2026-04-09 01:16:33.284881 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-09 01:16:33.284887 | orchestrator | "nonce": 0 2026-04-09 01:16:33.284893 | orchestrator | }, 2026-04-09 01:16:33.284899 | orchestrator | { 2026-04-09 01:16:33.284906 | orchestrator | "type": "v1", 2026-04-09 01:16:33.284912 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-09 01:16:33.284918 | orchestrator | "nonce": 0 2026-04-09 01:16:33.284924 | orchestrator | } 2026-04-09 01:16:33.284930 | orchestrator | ] 2026-04-09 01:16:33.284936 | orchestrator | }, 2026-04-09 01:16:33.284943 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-09 
01:16:33.284953 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-09 01:16:33.284959 | orchestrator | "priority": 0, 2026-04-09 01:16:33.284965 | orchestrator | "weight": 0, 2026-04-09 01:16:33.284972 | orchestrator | "crush_location": "{}" 2026-04-09 01:16:33.284977 | orchestrator | } 2026-04-09 01:16:33.284983 | orchestrator | ] 2026-04-09 01:16:33.284989 | orchestrator | } 2026-04-09 01:16:33.284995 | orchestrator | } 2026-04-09 01:16:33.285133 | orchestrator | 2026-04-09 01:16:33.285145 | orchestrator | # Ceph free space status 2026-04-09 01:16:33.285151 | orchestrator | 2026-04-09 01:16:33.285157 | orchestrator | + echo 2026-04-09 01:16:33.285164 | orchestrator | + echo '# Ceph free space status' 2026-04-09 01:16:33.285170 | orchestrator | + echo 2026-04-09 01:16:33.285176 | orchestrator | + ceph df 2026-04-09 01:16:33.851972 | orchestrator | --- RAW STORAGE --- 2026-04-09 01:16:33.852064 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-09 01:16:33.852076 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-04-09 01:16:33.852083 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-04-09 01:16:33.852087 | orchestrator | 2026-04-09 01:16:33.852093 | orchestrator | --- POOLS --- 2026-04-09 01:16:33.852097 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-09 01:16:33.852103 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-09 01:16:33.852107 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-09 01:16:33.852111 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-09 01:16:33.852115 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-09 01:16:33.852119 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-09 01:16:33.852122 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-09 01:16:33.852126 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-09 01:16:33.852130 | 
orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-09 01:16:33.852134 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-04-09 01:16:33.852137 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 01:16:33.852156 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 01:16:33.852160 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2026-04-09 01:16:33.852164 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 01:16:33.852167 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 01:16:33.899744 | orchestrator | ++ semver latest 5.0.0 2026-04-09 01:16:33.953291 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-09 01:16:33.953406 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 01:16:33.953417 | orchestrator | + osism apply facts 2026-04-09 01:16:45.269694 | orchestrator | 2026-04-09 01:16:45 | INFO  | Prepare task for execution of facts. 2026-04-09 01:16:45.341050 | orchestrator | 2026-04-09 01:16:45 | INFO  | Task b20a5c9b-72f0-4ee3-a468-17c447e71124 (facts) was prepared for execution. 2026-04-09 01:16:45.342184 | orchestrator | 2026-04-09 01:16:45 | INFO  | It takes a moment until task b20a5c9b-72f0-4ee3-a468-17c447e71124 (facts) has been started and output is visible here. 
2026-04-09 01:16:58.526278 | orchestrator |
2026-04-09 01:16:58.526368 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-09 01:16:58.526380 | orchestrator |
2026-04-09 01:16:58.526464 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-09 01:16:58.526476 | orchestrator | Thursday 09 April 2026 01:16:48 +0000 (0:00:00.348) 0:00:00.348 ********
2026-04-09 01:16:58.526482 | orchestrator | ok: [testbed-manager]
2026-04-09 01:16:58.526490 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:16:58.526496 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:16:58.526502 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:16:58.526508 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:16:58.526514 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:16:58.526520 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:16:58.526526 | orchestrator |
2026-04-09 01:16:58.526532 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-09 01:16:58.526538 | orchestrator | Thursday 09 April 2026 01:16:50 +0000 (0:00:01.323) 0:00:01.671 ********
2026-04-09 01:16:58.526544 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:16:58.526552 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:16:58.526558 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:16:58.526564 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:16:58.526571 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:16:58.526578 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:16:58.526582 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:16:58.526586 | orchestrator |
2026-04-09 01:16:58.526590 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 01:16:58.526594 | orchestrator |
2026-04-09 01:16:58.526598 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 01:16:58.526602 | orchestrator | Thursday 09 April 2026 01:16:51 +0000 (0:00:01.257) 0:00:02.929 ********
2026-04-09 01:16:58.526605 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:16:58.526609 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:16:58.526613 | orchestrator | ok: [testbed-manager]
2026-04-09 01:16:58.526617 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:16:58.526621 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:16:58.526625 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:16:58.526628 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:16:58.526632 | orchestrator |
2026-04-09 01:16:58.526636 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-09 01:16:58.526640 | orchestrator |
2026-04-09 01:16:58.526644 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-09 01:16:58.526648 | orchestrator | Thursday 09 April 2026 01:16:57 +0000 (0:00:06.228) 0:00:09.157 ********
2026-04-09 01:16:58.526652 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:16:58.526656 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:16:58.526660 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:16:58.526663 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:16:58.526667 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:16:58.526671 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:16:58.526675 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:16:58.526678 | orchestrator |
2026-04-09 01:16:58.526682 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:16:58.526707 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:16:58.526713 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:16:58.526717 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:16:58.526721 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:16:58.526724 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:16:58.526728 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:16:58.526732 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:16:58.526736 | orchestrator |
2026-04-09 01:16:58.526739 | orchestrator |
2026-04-09 01:16:58.526743 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:16:58.526747 | orchestrator | Thursday 09 April 2026 01:16:58 +0000 (0:00:00.722) 0:00:09.880 ********
2026-04-09 01:16:58.526751 | orchestrator | ===============================================================================
2026-04-09 01:16:58.526754 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.23s
2026-04-09 01:16:58.526759 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s
2026-04-09 01:16:58.526763 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2026-04-09 01:16:58.526767 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s
2026-04-09 01:16:58.704376 | orchestrator | + osism validate ceph-mons
2026-04-09 01:17:28.984575 | orchestrator |
2026-04-09 01:17:28.984672 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-09 01:17:28.984683 | orchestrator |
2026-04-09 01:17:28.984688 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-09 01:17:28.984692 | orchestrator | Thursday 09 April 2026 01:17:13 +0000 (0:00:00.389) 0:00:00.389 ********
2026-04-09 01:17:28.984697 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:28.984701 | orchestrator |
2026-04-09 01:17:28.984705 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-09 01:17:28.984710 | orchestrator | Thursday 09 April 2026 01:17:14 +0000 (0:00:00.930) 0:00:01.319 ********
2026-04-09 01:17:28.984714 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:28.984718 | orchestrator |
2026-04-09 01:17:28.984722 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-09 01:17:28.984727 | orchestrator | Thursday 09 April 2026 01:17:15 +0000 (0:00:00.612) 0:00:01.931 ********
2026-04-09 01:17:28.984731 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.984736 | orchestrator |
2026-04-09 01:17:28.984740 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-09 01:17:28.984744 | orchestrator | Thursday 09 April 2026 01:17:15 +0000 (0:00:00.111) 0:00:02.042 ********
2026-04-09 01:17:28.984748 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.984752 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:17:28.984756 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:17:28.984759 | orchestrator |
2026-04-09 01:17:28.984763 | orchestrator | TASK [Get container info] ******************************************************
2026-04-09 01:17:28.984767 | orchestrator | Thursday 09 April 2026 01:17:15 +0000 (0:00:00.265) 0:00:02.308 ********
2026-04-09 01:17:28.984771 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:17:28.984789 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:17:28.984794 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.984812 | orchestrator |
2026-04-09 01:17:28.984816 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-09 01:17:28.984820 | orchestrator | Thursday 09 April 2026 01:17:16 +0000 (0:00:01.492) 0:00:03.800 ********
2026-04-09 01:17:28.984824 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.984828 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:17:28.984832 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:17:28.984836 | orchestrator |
2026-04-09 01:17:28.984840 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-09 01:17:28.984844 | orchestrator | Thursday 09 April 2026 01:17:17 +0000 (0:00:00.253) 0:00:04.054 ********
2026-04-09 01:17:28.984848 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.984851 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:17:28.984855 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:17:28.984859 | orchestrator |
2026-04-09 01:17:28.984863 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-09 01:17:28.984867 | orchestrator | Thursday 09 April 2026 01:17:17 +0000 (0:00:00.288) 0:00:04.343 ********
2026-04-09 01:17:28.984871 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.984875 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:17:28.984879 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:17:28.984882 | orchestrator |
2026-04-09 01:17:28.984886 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-09 01:17:28.984890 | orchestrator | Thursday 09 April 2026 01:17:17 +0000 (0:00:00.313) 0:00:04.656 ********
2026-04-09 01:17:28.984894 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.984898 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:17:28.984902 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:17:28.984906 | orchestrator |
2026-04-09 01:17:28.984910 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-09 01:17:28.984914 | orchestrator | Thursday 09 April 2026 01:17:18 +0000 (0:00:00.439) 0:00:05.096 ********
2026-04-09 01:17:28.984917 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.984921 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:17:28.984925 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:17:28.984930 | orchestrator |
2026-04-09 01:17:28.984933 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-09 01:17:28.984938 | orchestrator | Thursday 09 April 2026 01:17:18 +0000 (0:00:00.285) 0:00:05.382 ********
2026-04-09 01:17:28.984944 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.984951 | orchestrator |
2026-04-09 01:17:28.984960 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-09 01:17:28.984967 | orchestrator | Thursday 09 April 2026 01:17:18 +0000 (0:00:00.234) 0:00:05.616 ********
2026-04-09 01:17:28.984972 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.984979 | orchestrator |
2026-04-09 01:17:28.984984 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-09 01:17:28.984990 | orchestrator | Thursday 09 April 2026 01:17:18 +0000 (0:00:00.231) 0:00:05.847 ********
2026-04-09 01:17:28.985027 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985034 | orchestrator |
2026-04-09 01:17:28.985040 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:28.985047 | orchestrator | Thursday 09 April 2026 01:17:19 +0000 (0:00:00.241) 0:00:06.089 ********
2026-04-09 01:17:28.985052 | orchestrator |
2026-04-09 01:17:28.985058 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:28.985065 | orchestrator | Thursday 09 April 2026 01:17:19 +0000 (0:00:00.069) 0:00:06.158 ********
2026-04-09 01:17:28.985071 | orchestrator |
2026-04-09 01:17:28.985076 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:28.985084 | orchestrator | Thursday 09 April 2026 01:17:19 +0000 (0:00:00.079) 0:00:06.238 ********
2026-04-09 01:17:28.985091 | orchestrator |
2026-04-09 01:17:28.985098 | orchestrator | TASK [Print report file information] *******************************************
2026-04-09 01:17:28.985116 | orchestrator | Thursday 09 April 2026 01:17:19 +0000 (0:00:00.209) 0:00:06.447 ********
2026-04-09 01:17:28.985122 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985129 | orchestrator |
2026-04-09 01:17:28.985135 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-09 01:17:28.985142 | orchestrator | Thursday 09 April 2026 01:17:19 +0000 (0:00:00.253) 0:00:06.700 ********
2026-04-09 01:17:28.985148 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985155 | orchestrator |
2026-04-09 01:17:28.985176 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-09 01:17:28.985181 | orchestrator | Thursday 09 April 2026 01:17:20 +0000 (0:00:00.243) 0:00:06.943 ********
2026-04-09 01:17:28.985185 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.985190 | orchestrator |
2026-04-09 01:17:28.985194 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-09 01:17:28.985198 | orchestrator | Thursday 09 April 2026 01:17:20 +0000 (0:00:00.121) 0:00:07.065 ********
2026-04-09 01:17:28.985203 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:17:28.985207 | orchestrator |
2026-04-09 01:17:28.985212 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-09 01:17:28.985216 | orchestrator | Thursday 09 April 2026 01:17:21 +0000 (0:00:01.706) 0:00:08.772 ********
2026-04-09 01:17:28.985221 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.985225 | orchestrator |
2026-04-09 01:17:28.985230 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-09 01:17:28.985235 | orchestrator | Thursday 09 April 2026 01:17:22 +0000 (0:00:00.327) 0:00:09.100 ********
2026-04-09 01:17:28.985239 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985243 | orchestrator |
2026-04-09 01:17:28.985248 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-09 01:17:28.985252 | orchestrator | Thursday 09 April 2026 01:17:22 +0000 (0:00:00.118) 0:00:09.218 ********
2026-04-09 01:17:28.985257 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.985261 | orchestrator |
2026-04-09 01:17:28.985265 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-09 01:17:28.985270 | orchestrator | Thursday 09 April 2026 01:17:22 +0000 (0:00:00.293) 0:00:09.512 ********
2026-04-09 01:17:28.985274 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.985278 | orchestrator |
2026-04-09 01:17:28.985283 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-09 01:17:28.985287 | orchestrator | Thursday 09 April 2026 01:17:22 +0000 (0:00:00.283) 0:00:09.796 ********
2026-04-09 01:17:28.985292 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985296 | orchestrator |
2026-04-09 01:17:28.985300 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-09 01:17:28.985305 | orchestrator | Thursday 09 April 2026 01:17:23 +0000 (0:00:00.108) 0:00:09.905 ********
2026-04-09 01:17:28.985309 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.985316 | orchestrator |
2026-04-09 01:17:28.985322 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-09 01:17:28.985331 | orchestrator | Thursday 09 April 2026 01:17:23 +0000 (0:00:00.135) 0:00:10.040 ********
2026-04-09 01:17:28.985340 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.985347 | orchestrator |
2026-04-09 01:17:28.985353 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-09 01:17:28.985359 | orchestrator | Thursday 09 April 2026 01:17:23 +0000 (0:00:00.280) 0:00:10.321 ********
2026-04-09 01:17:28.985365 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:17:28.985371 | orchestrator |
2026-04-09 01:17:28.985377 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-09 01:17:28.985384 | orchestrator | Thursday 09 April 2026 01:17:25 +0000 (0:00:01.604) 0:00:11.926 ********
2026-04-09 01:17:28.985390 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.985396 | orchestrator |
2026-04-09 01:17:28.985403 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-09 01:17:28.985408 | orchestrator | Thursday 09 April 2026 01:17:25 +0000 (0:00:00.298) 0:00:12.224 ********
2026-04-09 01:17:28.985420 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985426 | orchestrator |
2026-04-09 01:17:28.985432 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-09 01:17:28.985438 | orchestrator | Thursday 09 April 2026 01:17:25 +0000 (0:00:00.128) 0:00:12.352 ********
2026-04-09 01:17:28.985444 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:28.985450 | orchestrator |
2026-04-09 01:17:28.985456 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-09 01:17:28.985578 | orchestrator | Thursday 09 April 2026 01:17:25 +0000 (0:00:00.134) 0:00:12.486 ********
2026-04-09 01:17:28.985587 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985591 | orchestrator |
2026-04-09 01:17:28.985595 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-09 01:17:28.985601 | orchestrator | Thursday 09 April 2026 01:17:25 +0000 (0:00:00.150) 0:00:12.636 ********
2026-04-09 01:17:28.985607 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985617 | orchestrator |
2026-04-09 01:17:28.985625 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-09 01:17:28.985632 | orchestrator | Thursday 09 April 2026 01:17:25 +0000 (0:00:00.134) 0:00:12.771 ********
2026-04-09 01:17:28.985637 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:28.985642 | orchestrator |
2026-04-09 01:17:28.985648 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-09 01:17:28.985654 | orchestrator | Thursday 09 April 2026 01:17:26 +0000 (0:00:00.233) 0:00:13.004 ********
2026-04-09 01:17:28.985660 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:28.985666 | orchestrator |
2026-04-09 01:17:28.985672 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-09 01:17:28.985678 | orchestrator | Thursday 09 April 2026 01:17:26 +0000 (0:00:00.218) 0:00:13.223 ********
2026-04-09 01:17:28.985684 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:28.985691 | orchestrator |
2026-04-09 01:17:28.985697 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-09 01:17:28.985701 | orchestrator | Thursday 09 April 2026 01:17:28 +0000 (0:00:01.742) 0:00:14.966 ********
2026-04-09 01:17:28.985705 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:28.985709 | orchestrator |
2026-04-09 01:17:28.985713 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-09 01:17:28.985717 | orchestrator | Thursday 09 April 2026 01:17:28 +0000 (0:00:00.258) 0:00:15.224 ********
2026-04-09 01:17:28.985721 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:28.985725 | orchestrator |
2026-04-09 01:17:28.985736 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:31.174200 | orchestrator | Thursday 09 April 2026 01:17:28 +0000 (0:00:00.623) 0:00:15.848 ********
2026-04-09 01:17:31.175146 | orchestrator |
2026-04-09 01:17:31.175197 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:31.175208 | orchestrator | Thursday 09 April 2026 01:17:29 +0000 (0:00:00.071) 0:00:15.919 ********
2026-04-09 01:17:31.175214 | orchestrator |
2026-04-09 01:17:31.175221 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:31.175227 | orchestrator | Thursday 09 April 2026 01:17:29 +0000 (0:00:00.069) 0:00:15.989 ********
2026-04-09 01:17:31.175233 | orchestrator |
2026-04-09 01:17:31.175240 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-09 01:17:31.175246 | orchestrator | Thursday 09 April 2026 01:17:29 +0000 (0:00:00.085) 0:00:16.074 ********
2026-04-09 01:17:31.175255 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:31.175260 | orchestrator |
2026-04-09 01:17:31.175264 | orchestrator | TASK [Print report file information] *******************************************
2026-04-09 01:17:31.175268 | orchestrator | Thursday 09 April 2026 01:17:30 +0000 (0:00:01.216) 0:00:17.291 ********
2026-04-09 01:17:31.175291 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-09 01:17:31.175296 | orchestrator |  "msg": [
2026-04-09 01:17:31.175301 | orchestrator |  "Validator run completed.",
2026-04-09 01:17:31.175306 | orchestrator |  "You can find the report file here:",
2026-04-09 01:17:31.175310 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-09T01:17:14+00:00-report.json",
2026-04-09 01:17:31.175315 | orchestrator |  "on the following host:",
2026-04-09 01:17:31.175319 | orchestrator |  "testbed-manager"
2026-04-09 01:17:31.175323 | orchestrator |  ]
2026-04-09 01:17:31.175327 | orchestrator | }
2026-04-09 01:17:31.175331 | orchestrator |
2026-04-09 01:17:31.175335 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:17:31.175341 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-09 01:17:31.175346 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:17:31.175351 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:17:31.175355 | orchestrator |
2026-04-09 01:17:31.175358 | orchestrator |
2026-04-09 01:17:31.175362 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:17:31.175366 | orchestrator | Thursday 09 April 2026 01:17:30 +0000 (0:00:00.465) 0:00:17.757 ********
2026-04-09 01:17:31.175369 | orchestrator | ===============================================================================
2026-04-09 01:17:31.175373 | orchestrator | Aggregate test results step one ----------------------------------------- 1.74s
2026-04-09 01:17:31.175377 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.71s
2026-04-09 01:17:31.175381 | orchestrator | Gather status data ------------------------------------------------------ 1.60s
2026-04-09 01:17:31.175384 | orchestrator | Get container info ------------------------------------------------------ 1.49s
2026-04-09 01:17:31.175388 | orchestrator | Write report file ------------------------------------------------------- 1.22s
2026-04-09 01:17:31.175392 | orchestrator | Get timestamp for report file ------------------------------------------- 0.93s
2026-04-09 01:17:31.175395 | orchestrator | Aggregate test results step three --------------------------------------- 0.62s
2026-04-09 01:17:31.175399 | orchestrator | Create report output directory ------------------------------------------ 0.61s
2026-04-09 01:17:31.175403 | orchestrator | Print report file information ------------------------------------------- 0.47s
2026-04-09 01:17:31.175407 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.44s
2026-04-09 01:17:31.175411 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s
2026-04-09 01:17:31.175415 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s
2026-04-09 01:17:31.175435 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2026-04-09 01:17:31.175440 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2026-04-09 01:17:31.175448 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.29s
2026-04-09 01:17:31.175457 | orchestrator | Set test result to passed if container is existing ---------------------- 0.29s
2026-04-09 01:17:31.175463 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s
2026-04-09 01:17:31.175565 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.28s
2026-04-09 01:17:31.175572 | orchestrator | Prepare status test vars ------------------------------------------------ 0.28s
2026-04-09 01:17:31.175578 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s
2026-04-09 01:17:31.401939 | orchestrator | + osism validate ceph-mgrs
2026-04-09 01:18:00.077946 | orchestrator |
2026-04-09 01:18:00.078096 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-09 01:18:00.078121 | orchestrator |
2026-04-09 01:18:00.078132 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-09 01:18:00.078137 | orchestrator | Thursday 09 April 2026 01:17:46 +0000 (0:00:00.504) 0:00:00.504 ********
2026-04-09 01:18:00.078141 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:18:00.078145 | orchestrator |
2026-04-09 01:18:00.078149 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-09 01:18:00.078153 | orchestrator | Thursday 09 April 2026 01:17:47 +0000 (0:00:00.972) 0:00:01.477 ********
2026-04-09 01:18:00.078157 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:18:00.078161 | orchestrator |
2026-04-09 01:18:00.078165 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-09 01:18:00.078168 | orchestrator | Thursday 09 April 2026 01:17:47 +0000 (0:00:00.677) 0:00:02.155 ********
2026-04-09 01:18:00.078173 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078178 | orchestrator |
2026-04-09 01:18:00.078182 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-09 01:18:00.078186 | orchestrator | Thursday 09 April 2026 01:17:48 +0000 (0:00:00.121) 0:00:02.276 ********
2026-04-09 01:18:00.078189 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078193 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:18:00.078197 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:18:00.078201 | orchestrator |
2026-04-09 01:18:00.078204 | orchestrator | TASK [Get container info] ******************************************************
2026-04-09 01:18:00.078208 | orchestrator | Thursday 09 April 2026 01:17:48 +0000 (0:00:00.273) 0:00:02.549 ********
2026-04-09 01:18:00.078212 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:18:00.078216 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078220 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:18:00.078224 | orchestrator |
2026-04-09 01:18:00.078227 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-09 01:18:00.078231 | orchestrator | Thursday 09 April 2026 01:17:49 +0000 (0:00:01.544) 0:00:04.094 ********
2026-04-09 01:18:00.078235 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078239 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:18:00.078243 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:18:00.078247 | orchestrator |
2026-04-09 01:18:00.078251 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-09 01:18:00.078254 | orchestrator | Thursday 09 April 2026 01:17:50 +0000 (0:00:00.326) 0:00:04.421 ********
2026-04-09 01:18:00.078258 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078262 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:18:00.078266 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:18:00.078269 | orchestrator |
2026-04-09 01:18:00.078273 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-09 01:18:00.078277 | orchestrator | Thursday 09 April 2026 01:17:50 +0000 (0:00:00.274) 0:00:04.695 ********
2026-04-09 01:18:00.078281 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078284 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:18:00.078288 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:18:00.078292 | orchestrator |
2026-04-09 01:18:00.078296 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-09 01:18:00.078299 | orchestrator | Thursday 09 April 2026 01:17:50 +0000 (0:00:00.285) 0:00:04.981 ********
2026-04-09 01:18:00.078303 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078307 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:18:00.078311 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:18:00.078315 | orchestrator |
2026-04-09 01:18:00.078319 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-09 01:18:00.078323 | orchestrator | Thursday 09 April 2026 01:17:51 +0000 (0:00:00.444) 0:00:05.426 ********
2026-04-09 01:18:00.078326 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078330 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:18:00.078338 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:18:00.078342 | orchestrator |
2026-04-09 01:18:00.078346 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-09 01:18:00.078350 | orchestrator | Thursday 09 April 2026 01:17:51 +0000 (0:00:00.298) 0:00:05.724 ********
2026-04-09 01:18:00.078353 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078358 | orchestrator |
2026-04-09 01:18:00.078365 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-09 01:18:00.078370 | orchestrator | Thursday 09 April 2026 01:17:51 +0000 (0:00:00.252) 0:00:05.977 ********
2026-04-09 01:18:00.078379 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078388 | orchestrator |
2026-04-09 01:18:00.078393 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-09 01:18:00.078399 | orchestrator | Thursday 09 April 2026 01:17:51 +0000 (0:00:00.244) 0:00:06.221 ********
2026-04-09 01:18:00.078405 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078410 | orchestrator |
2026-04-09 01:18:00.078416 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:00.078421 | orchestrator | Thursday 09 April 2026 01:17:52 +0000 (0:00:00.244) 0:00:06.466 ********
2026-04-09 01:18:00.078427 | orchestrator |
2026-04-09 01:18:00.078433 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:00.078438 | orchestrator | Thursday 09 April 2026 01:17:52 +0000 (0:00:00.068) 0:00:06.534 ********
2026-04-09 01:18:00.078445 | orchestrator |
2026-04-09 01:18:00.078451 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:00.078457 | orchestrator | Thursday 09 April 2026 01:17:52 +0000 (0:00:00.067) 0:00:06.602 ********
2026-04-09 01:18:00.078463 | orchestrator |
2026-04-09 01:18:00.078469 | orchestrator | TASK [Print report file information] *******************************************
2026-04-09 01:18:00.078475 | orchestrator | Thursday 09 April 2026 01:17:52 +0000 (0:00:00.207) 0:00:06.810 ********
2026-04-09 01:18:00.078481 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078486 | orchestrator |
2026-04-09 01:18:00.078491 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-09 01:18:00.078496 | orchestrator | Thursday 09 April 2026 01:17:52 +0000 (0:00:00.257) 0:00:07.067 ********
2026-04-09 01:18:00.078500 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078505 | orchestrator |
2026-04-09 01:18:00.078524 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-09 01:18:00.078529 | orchestrator | Thursday 09 April 2026 01:17:53 +0000 (0:00:00.240) 0:00:07.308 ********
2026-04-09 01:18:00.078620 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078624 | orchestrator |
2026-04-09 01:18:00.078628 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-09 01:18:00.078632 | orchestrator | Thursday 09 April 2026 01:17:53 +0000 (0:00:00.127) 0:00:07.435 ********
2026-04-09 01:18:00.078636 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:18:00.078639 | orchestrator |
2026-04-09 01:18:00.078643 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-09 01:18:00.078647 | orchestrator | Thursday 09 April 2026 01:17:54 +0000 (0:00:01.701) 0:00:09.137 ********
2026-04-09 01:18:00.078651 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078654 | orchestrator |
2026-04-09 01:18:00.078658 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-09 01:18:00.078662 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.241) 0:00:09.378 ********
2026-04-09 01:18:00.078665 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078669 | orchestrator |
2026-04-09 01:18:00.078673 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-09 01:18:00.078677 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.302) 0:00:09.680 ********
2026-04-09 01:18:00.078680 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078684 | orchestrator |
2026-04-09 01:18:00.078688 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-09 01:18:00.078697 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.135) 0:00:09.816 ********
2026-04-09 01:18:00.078700 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:18:00.078704 | orchestrator |
2026-04-09 01:18:00.078708 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-09 01:18:00.078712 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.137) 0:00:09.954 ********
2026-04-09 01:18:00.078715 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:18:00.078719 | orchestrator |
2026-04-09 01:18:00.078723 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-09 01:18:00.078727 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.259) 0:00:10.213 ********
2026-04-09 01:18:00.078730 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:18:00.078734 | orchestrator |
2026-04-09 01:18:00.078738 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-09 01:18:00.078741 | orchestrator | Thursday 09 April 2026 01:17:56 +0000 (0:00:00.259) 0:00:10.473 ********
2026-04-09 01:18:00.078745 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:18:00.078749 | orchestrator |
2026-04-09 01:18:00.078753 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-09 01:18:00.078756 | orchestrator | Thursday 09 April 2026 01:17:57 +0000 (0:00:01.469) 0:00:11.943 ********
2026-04-09 01:18:00.078760 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:18:00.078764 | orchestrator |
2026-04-09 01:18:00.078768 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-09 01:18:00.078771 | orchestrator | Thursday 09 April 2026 01:17:57 +0000 (0:00:00.265) 0:00:12.208 ********
2026-04-09 01:18:00.078775 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:18:00.078779 | orchestrator |
2026-04-09 01:18:00.078783 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:00.078787 | orchestrator | Thursday 09 April 2026 01:17:58 +0000 (0:00:00.278) 0:00:12.487 ********
2026-04-09 01:18:00.078790 | orchestrator |
2026-04-09 01:18:00.078794 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:00.078798 | orchestrator | Thursday 09 April 2026 01:17:58 +0000 (0:00:00.068) 0:00:12.555 ********
2026-04-09 01:18:00.078801 | orchestrator |
2026-04-09 01:18:00.078805 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:00.078809 | orchestrator | Thursday 09 April 2026 01:17:58 +0000 (0:00:00.074) 0:00:12.630 ********
2026-04-09 01:18:00.078813 | orchestrator |
2026-04-09 01:18:00.078816 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-09 01:18:00.078820 | orchestrator | Thursday 09 April 2026 01:17:58 +0000 (0:00:00.072) 0:00:12.702 ********
2026-04-09 01:18:00.078824 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:18:00.078828 | orchestrator |
2026-04-09 01:18:00.078832 | orchestrator | TASK [Print report file information] *******************************************
2026-04-09 01:18:00.078835 | orchestrator | Thursday 09 April 2026 01:17:59 +0000 (0:00:01.214) 0:00:13.917 ********
2026-04-09 01:18:00.078839 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-09 01:18:00.078843 | orchestrator |  "msg": [
2026-04-09 01:18:00.078847 | orchestrator |  "Validator run completed.",
2026-04-09 01:18:00.078851 | orchestrator |  "You can find the report file here:",
2026-04-09 01:18:00.078855 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-09T01:17:47+00:00-report.json",
2026-04-09 01:18:00.078861 | orchestrator |  "on the following host:",
2026-04-09 01:18:00.078865 | orchestrator |  "testbed-manager"
2026-04-09 01:18:00.078869 | orchestrator |  ]
2026-04-09 01:18:00.078873 | orchestrator | }
2026-04-09 01:18:00.078877 | orchestrator |
2026-04-09 01:18:00.078881 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:18:00.078886 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-09 01:18:00.078893 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:18:00.078901 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:18:00.410526 | orchestrator |
2026-04-09 01:18:00.410630 | orchestrator |
2026-04-09 01:18:00.410639 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:18:00.410645 | orchestrator | Thursday 09 April 2026 01:18:00 +0000 (0:00:00.403) 0:00:14.320 ********
2026-04-09 01:18:00.410650 | orchestrator | ===============================================================================
2026-04-09 01:18:00.410654 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.70s
2026-04-09 01:18:00.410658 | orchestrator | Get container info ------------------------------------------------------ 1.54s
2026-04-09 01:18:00.410662 | orchestrator | Aggregate test results step one ----------------------------------------- 1.47s
2026-04-09 01:18:00.410666 | orchestrator | Write report file ------------------------------------------------------- 1.21s
2026-04-09 01:18:00.410670 | orchestrator | Get timestamp for report file ------------------------------------------- 0.97s
2026-04-09 01:18:00.410674 | orchestrator | Create report output directory ------------------------------------------ 0.68s
2026-04-09 01:18:00.410678 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.44s
2026-04-09 01:18:00.410682 | orchestrator | Print report file information ------------------------------------------- 0.40s
2026-04-09 01:18:00.410686 | orchestrator | Flush handlers ---------------------------------------------------------- 0.34s
2026-04-09 01:18:00.410690 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s
2026-04-09 01:18:00.410693 |
orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.30s 2026-04-09 01:18:00.410697 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s 2026-04-09 01:18:00.410701 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-04-09 01:18:00.410704 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s 2026-04-09 01:18:00.410708 | orchestrator | Set test result to passed if container is existing ---------------------- 0.27s 2026-04-09 01:18:00.410712 | orchestrator | Prepare test data for container existence test -------------------------- 0.27s 2026-04-09 01:18:00.410716 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-04-09 01:18:00.410719 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s 2026-04-09 01:18:00.410723 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s 2026-04-09 01:18:00.410727 | orchestrator | Print report file information ------------------------------------------- 0.26s 2026-04-09 01:18:00.613526 | orchestrator | + osism validate ceph-osds 2026-04-09 01:18:19.333734 | orchestrator | 2026-04-09 01:18:19.333836 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-09 01:18:19.333844 | orchestrator | 2026-04-09 01:18:19.333849 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-09 01:18:19.333854 | orchestrator | Thursday 09 April 2026 01:18:15 +0000 (0:00:00.495) 0:00:00.495 ******** 2026-04-09 01:18:19.333860 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:19.333864 | orchestrator | 2026-04-09 01:18:19.333868 | orchestrator | TASK [Get extra vars for Ceph configuration]
*********************************** 2026-04-09 01:18:19.333872 | orchestrator | Thursday 09 April 2026 01:18:16 +0000 (0:00:00.982) 0:00:01.478 ******** 2026-04-09 01:18:19.333877 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:19.333881 | orchestrator | 2026-04-09 01:18:19.333885 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 01:18:19.333905 | orchestrator | Thursday 09 April 2026 01:18:16 +0000 (0:00:00.237) 0:00:01.716 ******** 2026-04-09 01:18:19.333926 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:19.333930 | orchestrator | 2026-04-09 01:18:19.333934 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-09 01:18:19.333938 | orchestrator | Thursday 09 April 2026 01:18:17 +0000 (0:00:00.680) 0:00:02.396 ******** 2026-04-09 01:18:19.333942 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:19.333947 | orchestrator | 2026-04-09 01:18:19.333951 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-09 01:18:19.333956 | orchestrator | Thursday 09 April 2026 01:18:17 +0000 (0:00:00.119) 0:00:02.515 ******** 2026-04-09 01:18:19.333963 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:19.333970 | orchestrator | 2026-04-09 01:18:19.333976 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-09 01:18:19.333982 | orchestrator | Thursday 09 April 2026 01:18:17 +0000 (0:00:00.130) 0:00:02.646 ******** 2026-04-09 01:18:19.333988 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:19.333994 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:19.334000 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:19.334006 | orchestrator | 2026-04-09 01:18:19.334055 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-09 01:18:19.334060 | orchestrator | Thursday 09 April 2026 01:18:17 +0000 (0:00:00.444) 0:00:03.090 ******** 2026-04-09 01:18:19.334064 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:19.334068 | orchestrator | 2026-04-09 01:18:19.334072 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-09 01:18:19.334076 | orchestrator | Thursday 09 April 2026 01:18:18 +0000 (0:00:00.198) 0:00:03.289 ******** 2026-04-09 01:18:19.334080 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:19.334084 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:19.334088 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:19.334092 | orchestrator | 2026-04-09 01:18:19.334096 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-09 01:18:19.334099 | orchestrator | Thursday 09 April 2026 01:18:18 +0000 (0:00:00.300) 0:00:03.589 ******** 2026-04-09 01:18:19.334103 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:19.334107 | orchestrator | 2026-04-09 01:18:19.334111 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:18:19.334115 | orchestrator | Thursday 09 April 2026 01:18:18 +0000 (0:00:00.321) 0:00:03.911 ******** 2026-04-09 01:18:19.334118 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:19.334122 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:19.334126 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:19.334134 | orchestrator | 2026-04-09 01:18:19.334138 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-09 01:18:19.334142 | orchestrator | Thursday 09 April 2026 01:18:19 +0000 (0:00:00.278) 0:00:04.190 ******** 2026-04-09 01:18:19.334148 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f41301e6ca810bc6d9cf31318648cdea969989ab6f56d587454bd9bddf3d900c', 'image': 
'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-09 01:18:19.334154 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c71add95ba02660350d74aabc2f2c15923688e567ff37fd92e5410f25c6f96aa', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:18:19.334161 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5b8d249e2f1c95945265337f3c204a3d2bbb850571126dc9edadb85e3a069289', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:18:19.334166 | orchestrator | skipping: [testbed-node-3] => (item={'id': '933023fd46958a2fde7645aedf373148c2b8a3359ac75c09204ca9c69e85539a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-09 01:18:19.334181 | orchestrator | skipping: [testbed-node-3] => (item={'id': '01b93b1f3eb01868b091fec5a54580874116a1e845c90e3c20ebe9d1823917fb', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-09 01:18:19.334200 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd639625a2d3763232bd43b2a124cf292b31ed3dec2671be67ca75d8822c3edea', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-09 01:18:19.334205 | orchestrator | skipping: [testbed-node-3] => (item={'id': '95a11ee6eb7ea1cab48c2dc4cf49a5281ef201776b7cd4767bf81dfc420d2898', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-09 01:18:19.334211 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': 'e7f04a0e425a6dc1ec11816fc38c1b82b5f1c035fa5223edd337d7adea05a184', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:18:19.334215 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4cf04e7cd3e4babdac6dcde9e2c3adfe429a41afcb3130842e14ebc0901063fb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-09 01:18:19.334219 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5d4b79ef3f25b89a7bca9a721e8e746185be17e815fcb54d6c78a81db1a66ae5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-09 01:18:19.334223 | orchestrator | ok: [testbed-node-3] => (item={'id': '8205432d78315d9570e2ba5983c4d0aa338dfae6ea207e58298f00f4d26ea2ea', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-09 01:18:19.334227 | orchestrator | ok: [testbed-node-3] => (item={'id': '6d119ddb1f95bd2d74ebd98716c7b0f172aa96fb17b84b474bc5cb21a81f9bd8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-09 01:18:19.334231 | orchestrator | skipping: [testbed-node-3] => (item={'id': '579d97ce7e577d732046bfc2ff179fd3d0e29f5e0b97e655641148cb8435bf26', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-09 01:18:19.334235 | orchestrator | skipping: [testbed-node-3] => (item={'id': '872a92730366777d36fe8de0f798c8c7214fbaac33b05ea615c7aed96b21120b', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
27 minutes (healthy)'})  2026-04-09 01:18:19.334243 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84017b98d7fff9165bb921a6df635f3ae1e0ad47cdb7145a7c05f6058c0067ea', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-09 01:18:19.334247 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2a77c0f6f72a3cff483332b58a5d170723131cb9a936d4103f158a924c1cab4e', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:18:19.334251 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ee0294db6472e4f766b657dd24610333626189ca3502273d0677398aed5d0f40', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:18:19.334255 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2f3453d409dd93c60e14e42e5261937da44acff2d6d65d56d81b84ef03fcd471', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:18:19.334262 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cec59357f6fc24bacac2a286b387f4a0f39896f5892a8d67774e7aa0cc19bd8a', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-09 01:18:19.334267 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5c6ccb8698c98450c3ab2945c67157a42390fe471acff2ce0d195b0241e7134c', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-09 01:18:19.334271 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1a3d44fe079a0058fe41656e99df83922889bce9b065dcbac448ed71bb0f18a1', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:18:19.334281 | orchestrator | skipping: [testbed-node-4] => (item={'id': '08fa8b4edd8ddc7974409194b20560887e6f201c67c0f11f4d40b0f18c89b354', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2026-04-09 01:18:19.454742 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3eeb3b680f6292571b00c8003204e105d54c1fdad5c1859b8660faa6203b7d42', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-09 01:18:19.454886 | orchestrator | skipping: [testbed-node-4] => (item={'id': '34b9119c3bcfe3481f5d28d64c2ba7b4f7d0a3250b29890f194eb16ab180dcd7', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-09 01:18:19.454901 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9a65434284e99255a180afc74aa9bb5d9d9cfc8480b4ae60509233b0659a073e', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-09 01:18:19.454911 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ca8f485c993548fbaee35edea5c2415f8aa591c5a706bca224bec35cbd637c4a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:18:19.454917 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5c05dc83d8f36aeccef8bd49969887c8ce50f173afb248e0bd7ae2c0479a4a84', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-09 01:18:19.454929 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'72ff34cafa530ef1242251fb4a9bae709bc3369decb63139bd98d44a0199df17', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-09 01:18:19.454939 | orchestrator | ok: [testbed-node-4] => (item={'id': '67622bc9bca12dcac216581f2efee29235cc809ecf5b567f9b89a75d19edabee', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-09 01:18:19.454947 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b1e61e8e021404e426c149ac8ef79477b44c39d82f71d09eedd30f2629af5664', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-09 01:18:19.454954 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1b3a7177cff3ec911e8f6416718b2244b5b6a396b87ef22fb0b9036136771169', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-09 01:18:19.454961 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8f617f04aa93978fe6e90a72e98b0d0aaa5fedc58fe9e79a1bae0e48da702e4a', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-09 01:18:19.454997 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6d7a055a8ae8d16d1c3f8a59c0af277b3077d103e9c93dbf6c1907d67410eb17', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-09 01:18:19.455006 | orchestrator | skipping: [testbed-node-4] => (item={'id': '93cce89a650c30d5d31bef186a99eea0d60b9eb78fe265f7ceec968778c6a24a', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:18:19.455013 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'fc840279ad6270d636345b3eb2f945b6c3c92b294367f51b429242fde646e3a6', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:18:19.455019 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bcc1be7d9fe9ffbdd10540a9f3d71e0b255cfac260a16c88ec34c27f7c64f112', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:18:19.455026 | orchestrator | skipping: [testbed-node-5] => (item={'id': '03c479cfbdddf08361c981eced849f9fdd78f1931627473ebfc3c4d191edcd71', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-09 01:18:19.455053 | orchestrator | skipping: [testbed-node-5] => (item={'id': '956102547ce54354cddb7c5d6e9796a05fd17f9f2bbfab8c281b17160a7c80e0', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-04-09 01:18:19.455059 | orchestrator | skipping: [testbed-node-5] => (item={'id': '734b324ece5050560f49e71754ab6e9b24ee7e807b25b4e33dfad744bd606294', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:18:19.455063 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0c0dd934bcc2eb0a4e7488f7c90a14e49c01cbfa251854ac53c348523489c2dc', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2026-04-09 01:18:19.455067 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4b95416f23bce18fd4fb8ff753f5e0ae8a2844fecddf4d03fbe7ccd40b0752f8', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 
'running', 'status': 'Up 15 minutes'})  2026-04-09 01:18:19.455089 | orchestrator | skipping: [testbed-node-5] => (item={'id': '34bf8bfb4d4c192695dd37426335486822c5a16733f71110c1c4d238d5fc526f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-09 01:18:19.455093 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7f06540641366e28efa15e0630500870274b9ee9b25f9e66d487737587983012', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-09 01:18:19.455097 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dfb502953172971dd3506c7717f6cde540e94ed57c03d8fb455fd4f4eb40edad', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:18:19.455101 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2500e4ad50dc2df3b4c9b710f9af3058fe3503ec11abf862c2881213bb6344da', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-09 01:18:19.455108 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd8fba3cf3bf00422669d0f60baf0ddb66e72e7d7d658668255dc74af938666ef', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-09 01:18:19.455117 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a0a7e6cc73781c1ff28f8d8eec23cd27e82102c76f2af66d38f0765e840a31dc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-09 01:18:19.455121 | orchestrator | ok: [testbed-node-5] => (item={'id': '0e0a5d284f437b8150e716d4722624f954204ed2343ef932e2abcbb22e9aae07', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-09 01:18:19.455125 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11f8fb9120b25134af0971976c7faa697572b51e3417cf581832436ce8b0a9d6', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-09 01:18:19.455129 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2403e785f540d5ba44a42d2ee42cfd58ab035e6d85ffabed6cbe629bf5f28e3c', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-09 01:18:19.455133 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3a69d2dcf4aa1414faf49afda50baa865bb4d8896c39cd68d7a42f6596b46230', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-09 01:18:19.455137 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b80ec76219c867333521b2a4bd67dd03267cc205b2ce03839ddccd86719bae6e', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:18:19.455140 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5c6a57d44392f1e76e36aa296fd182532dc8926f0d2362944dfedbd1fea3cfdb', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:18:19.455149 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'acb948930ac4d0d55a717a1acabe2a4ddcb292f00fbe69f6140e55cfc634ea55', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:18:32.146397 | orchestrator | 2026-04-09 01:18:32.146500 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-04-09 01:18:32.146512 | orchestrator | Thursday 09 April 2026 01:18:19 +0000 (0:00:00.620) 0:00:04.811 ******** 2026-04-09 01:18:32.146518 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.146526 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.146533 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.146539 | orchestrator | 2026-04-09 01:18:32.146545 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-09 01:18:32.146552 | orchestrator | Thursday 09 April 2026 01:18:20 +0000 (0:00:00.305) 0:00:05.116 ******** 2026-04-09 01:18:32.146559 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.146567 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:32.146574 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:32.146580 | orchestrator | 2026-04-09 01:18:32.146587 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-09 01:18:32.146595 | orchestrator | Thursday 09 April 2026 01:18:20 +0000 (0:00:00.283) 0:00:05.400 ******** 2026-04-09 01:18:32.146652 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.146660 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.146666 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.146673 | orchestrator | 2026-04-09 01:18:32.146680 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:18:32.146687 | orchestrator | Thursday 09 April 2026 01:18:20 +0000 (0:00:00.303) 0:00:05.703 ******** 2026-04-09 01:18:32.146693 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.146699 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.146706 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.146739 | orchestrator | 2026-04-09 01:18:32.146747 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-09 
01:18:32.146753 | orchestrator | Thursday 09 April 2026 01:18:21 +0000 (0:00:00.406) 0:00:06.110 ******** 2026-04-09 01:18:32.146761 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-09 01:18:32.146770 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-09 01:18:32.146777 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.146784 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-09 01:18:32.146791 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-09 01:18:32.146798 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:32.146805 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-09 01:18:32.146811 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-09 01:18:32.146817 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:32.146823 | orchestrator | 2026-04-09 01:18:32.146843 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-09 01:18:32.146850 | orchestrator | Thursday 09 April 2026 01:18:21 +0000 (0:00:00.316) 0:00:06.426 ******** 2026-04-09 01:18:32.146856 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.146863 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.146870 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.146877 | orchestrator | 2026-04-09 01:18:32.146884 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-09 01:18:32.146891 | orchestrator | Thursday 09 April 2026 01:18:21 +0000 (0:00:00.272) 0:00:06.699 ******** 2026-04-09 01:18:32.146897 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 01:18:32.146904 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:32.146911 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:32.146918 | orchestrator | 2026-04-09 01:18:32.146925 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-09 01:18:32.146932 | orchestrator | Thursday 09 April 2026 01:18:21 +0000 (0:00:00.268) 0:00:06.967 ******** 2026-04-09 01:18:32.146939 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.146946 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:32.146952 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:32.146959 | orchestrator | 2026-04-09 01:18:32.146966 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-09 01:18:32.146972 | orchestrator | Thursday 09 April 2026 01:18:22 +0000 (0:00:00.479) 0:00:07.447 ******** 2026-04-09 01:18:32.146979 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.146985 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.146992 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.146998 | orchestrator | 2026-04-09 01:18:32.147005 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 01:18:32.147012 | orchestrator | Thursday 09 April 2026 01:18:22 +0000 (0:00:00.287) 0:00:07.734 ******** 2026-04-09 01:18:32.147018 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147025 | orchestrator | 2026-04-09 01:18:32.147031 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 01:18:32.147038 | orchestrator | Thursday 09 April 2026 01:18:22 +0000 (0:00:00.236) 0:00:07.971 ******** 2026-04-09 01:18:32.147044 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147051 | orchestrator | 2026-04-09 01:18:32.147058 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-04-09 01:18:32.147064 | orchestrator | Thursday 09 April 2026 01:18:23 +0000 (0:00:00.244) 0:00:08.216 ******** 2026-04-09 01:18:32.147071 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147077 | orchestrator | 2026-04-09 01:18:32.147084 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:32.147098 | orchestrator | Thursday 09 April 2026 01:18:23 +0000 (0:00:00.231) 0:00:08.447 ******** 2026-04-09 01:18:32.147104 | orchestrator | 2026-04-09 01:18:32.147111 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:32.147117 | orchestrator | Thursday 09 April 2026 01:18:23 +0000 (0:00:00.076) 0:00:08.523 ******** 2026-04-09 01:18:32.147124 | orchestrator | 2026-04-09 01:18:32.147131 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:32.147155 | orchestrator | Thursday 09 April 2026 01:18:23 +0000 (0:00:00.067) 0:00:08.591 ******** 2026-04-09 01:18:32.147162 | orchestrator | 2026-04-09 01:18:32.147169 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 01:18:32.147176 | orchestrator | Thursday 09 April 2026 01:18:23 +0000 (0:00:00.083) 0:00:08.674 ******** 2026-04-09 01:18:32.147181 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147187 | orchestrator | 2026-04-09 01:18:32.147194 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-09 01:18:32.147200 | orchestrator | Thursday 09 April 2026 01:18:24 +0000 (0:00:00.623) 0:00:09.298 ******** 2026-04-09 01:18:32.147207 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147213 | orchestrator | 2026-04-09 01:18:32.147220 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:18:32.147227 | 
orchestrator | Thursday 09 April 2026 01:18:24 +0000 (0:00:00.241) 0:00:09.540 ******** 2026-04-09 01:18:32.147234 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147241 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.147248 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.147255 | orchestrator | 2026-04-09 01:18:32.147262 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-09 01:18:32.147269 | orchestrator | Thursday 09 April 2026 01:18:24 +0000 (0:00:00.296) 0:00:09.837 ******** 2026-04-09 01:18:32.147275 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147281 | orchestrator | 2026-04-09 01:18:32.147288 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-09 01:18:32.147294 | orchestrator | Thursday 09 April 2026 01:18:24 +0000 (0:00:00.233) 0:00:10.070 ******** 2026-04-09 01:18:32.147301 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:18:32.147308 | orchestrator | 2026-04-09 01:18:32.147315 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-09 01:18:32.147322 | orchestrator | Thursday 09 April 2026 01:18:27 +0000 (0:00:02.066) 0:00:12.136 ******** 2026-04-09 01:18:32.147328 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147335 | orchestrator | 2026-04-09 01:18:32.147342 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-09 01:18:32.147349 | orchestrator | Thursday 09 April 2026 01:18:27 +0000 (0:00:00.124) 0:00:12.261 ******** 2026-04-09 01:18:32.147356 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147363 | orchestrator | 2026-04-09 01:18:32.147369 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-09 01:18:32.147376 | orchestrator | Thursday 09 April 2026 01:18:27 +0000 (0:00:00.284) 
0:00:12.546 ******** 2026-04-09 01:18:32.147383 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147389 | orchestrator | 2026-04-09 01:18:32.147396 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-09 01:18:32.147403 | orchestrator | Thursday 09 April 2026 01:18:27 +0000 (0:00:00.132) 0:00:12.678 ******** 2026-04-09 01:18:32.147409 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147416 | orchestrator | 2026-04-09 01:18:32.147422 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:18:32.147434 | orchestrator | Thursday 09 April 2026 01:18:27 +0000 (0:00:00.127) 0:00:12.806 ******** 2026-04-09 01:18:32.147441 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147447 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.147454 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.147466 | orchestrator | 2026-04-09 01:18:32.147472 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-09 01:18:32.147479 | orchestrator | Thursday 09 April 2026 01:18:28 +0000 (0:00:00.453) 0:00:13.259 ******** 2026-04-09 01:18:32.147485 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:18:32.147492 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:18:32.147498 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:18:32.147505 | orchestrator | 2026-04-09 01:18:32.147510 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-09 01:18:32.147516 | orchestrator | Thursday 09 April 2026 01:18:29 +0000 (0:00:01.700) 0:00:14.960 ******** 2026-04-09 01:18:32.147522 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147527 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.147533 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.147539 | orchestrator | 2026-04-09 01:18:32.147545 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-04-09 01:18:32.147551 | orchestrator | Thursday 09 April 2026 01:18:30 +0000 (0:00:00.294) 0:00:15.255 ******** 2026-04-09 01:18:32.147557 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147562 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.147568 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.147574 | orchestrator | 2026-04-09 01:18:32.147580 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-09 01:18:32.147586 | orchestrator | Thursday 09 April 2026 01:18:30 +0000 (0:00:00.468) 0:00:15.723 ******** 2026-04-09 01:18:32.147592 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147716 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:32.147727 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:32.147733 | orchestrator | 2026-04-09 01:18:32.147740 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-09 01:18:32.147746 | orchestrator | Thursday 09 April 2026 01:18:31 +0000 (0:00:00.479) 0:00:16.202 ******** 2026-04-09 01:18:32.147753 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:32.147760 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:32.147767 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:32.147774 | orchestrator | 2026-04-09 01:18:32.147781 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-09 01:18:32.147788 | orchestrator | Thursday 09 April 2026 01:18:31 +0000 (0:00:00.326) 0:00:16.529 ******** 2026-04-09 01:18:32.147794 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147800 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:32.147807 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:32.147812 | orchestrator | 2026-04-09 01:18:32.147818 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-04-09 01:18:32.147825 | orchestrator | Thursday 09 April 2026 01:18:31 +0000 (0:00:00.264) 0:00:16.794 ******** 2026-04-09 01:18:32.147832 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:32.147839 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:32.147846 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:32.147853 | orchestrator | 2026-04-09 01:18:32.147868 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:18:39.006677 | orchestrator | Thursday 09 April 2026 01:18:32 +0000 (0:00:00.448) 0:00:17.242 ******** 2026-04-09 01:18:39.006746 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:39.006753 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:39.006757 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:39.006761 | orchestrator | 2026-04-09 01:18:39.006766 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-09 01:18:39.006771 | orchestrator | Thursday 09 April 2026 01:18:32 +0000 (0:00:00.456) 0:00:17.699 ******** 2026-04-09 01:18:39.006775 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:39.006779 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:39.006783 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:39.006786 | orchestrator | 2026-04-09 01:18:39.006790 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-09 01:18:39.006813 | orchestrator | Thursday 09 April 2026 01:18:33 +0000 (0:00:00.472) 0:00:18.172 ******** 2026-04-09 01:18:39.006817 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:39.006820 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:39.006824 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:39.006829 | orchestrator | 2026-04-09 01:18:39.006833 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-09 
01:18:39.006837 | orchestrator | Thursday 09 April 2026 01:18:33 +0000 (0:00:00.297) 0:00:18.469 ******** 2026-04-09 01:18:39.006841 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:39.006845 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:39.006849 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:39.006853 | orchestrator | 2026-04-09 01:18:39.006856 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-09 01:18:39.006860 | orchestrator | Thursday 09 April 2026 01:18:33 +0000 (0:00:00.461) 0:00:18.930 ******** 2026-04-09 01:18:39.006864 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:39.006868 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:39.006872 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:39.006875 | orchestrator | 2026-04-09 01:18:39.006879 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 01:18:39.006883 | orchestrator | Thursday 09 April 2026 01:18:34 +0000 (0:00:00.298) 0:00:19.229 ******** 2026-04-09 01:18:39.006887 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:39.006891 | orchestrator | 2026-04-09 01:18:39.006895 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-09 01:18:39.006899 | orchestrator | Thursday 09 April 2026 01:18:34 +0000 (0:00:00.249) 0:00:19.478 ******** 2026-04-09 01:18:39.006903 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:39.006906 | orchestrator | 2026-04-09 01:18:39.006910 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 01:18:39.006914 | orchestrator | Thursday 09 April 2026 01:18:34 +0000 (0:00:00.243) 0:00:19.721 ******** 2026-04-09 01:18:39.006918 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:39.006921 | orchestrator | 2026-04-09 01:18:39.006925 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 01:18:39.006929 | orchestrator | Thursday 09 April 2026 01:18:36 +0000 (0:00:01.677) 0:00:21.399 ******** 2026-04-09 01:18:39.006933 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:39.006937 | orchestrator | 2026-04-09 01:18:39.006941 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 01:18:39.006945 | orchestrator | Thursday 09 April 2026 01:18:36 +0000 (0:00:00.252) 0:00:21.652 ******** 2026-04-09 01:18:39.006949 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:39.006953 | orchestrator | 2026-04-09 01:18:39.006957 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:39.006961 | orchestrator | Thursday 09 April 2026 01:18:36 +0000 (0:00:00.243) 0:00:21.896 ******** 2026-04-09 01:18:39.006965 | orchestrator | 2026-04-09 01:18:39.006969 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:39.006972 | orchestrator | Thursday 09 April 2026 01:18:36 +0000 (0:00:00.068) 0:00:21.964 ******** 2026-04-09 01:18:39.006976 | orchestrator | 2026-04-09 01:18:39.006980 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:39.006984 | orchestrator | Thursday 09 April 2026 01:18:37 +0000 (0:00:00.219) 0:00:22.183 ******** 2026-04-09 01:18:39.006987 | orchestrator | 2026-04-09 01:18:39.006991 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 01:18:39.007028 | orchestrator | Thursday 09 April 2026 01:18:37 +0000 (0:00:00.068) 0:00:22.252 ******** 2026-04-09 01:18:39.007032 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:39.007036 | orchestrator | 
2026-04-09 01:18:39.007040 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 01:18:39.007047 | orchestrator | Thursday 09 April 2026 01:18:38 +0000 (0:00:01.221) 0:00:23.473 ******** 2026-04-09 01:18:39.007053 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-09 01:18:39.007059 | orchestrator |  "msg": [ 2026-04-09 01:18:39.007065 | orchestrator |  "Validator run completed.", 2026-04-09 01:18:39.007071 | orchestrator |  "You can find the report file here:", 2026-04-09 01:18:39.007077 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-09T01:18:16+00:00-report.json", 2026-04-09 01:18:39.007087 | orchestrator |  "on the following host:", 2026-04-09 01:18:39.007094 | orchestrator |  "testbed-manager" 2026-04-09 01:18:39.007101 | orchestrator |  ] 2026-04-09 01:18:39.007107 | orchestrator | } 2026-04-09 01:18:39.007113 | orchestrator | 2026-04-09 01:18:39.007119 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:18:39.007126 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 01:18:39.007134 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 01:18:39.007154 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 01:18:39.007160 | orchestrator | 2026-04-09 01:18:39.007165 | orchestrator | 2026-04-09 01:18:39.007171 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:18:39.007177 | orchestrator | Thursday 09 April 2026 01:18:38 +0000 (0:00:00.380) 0:00:23.854 ******** 2026-04-09 01:18:39.007184 | orchestrator | =============================================================================== 2026-04-09 01:18:39.007190 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.07s 2026-04-09 01:18:39.007196 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.70s 2026-04-09 01:18:39.007202 | orchestrator | Aggregate test results step one ----------------------------------------- 1.68s 2026-04-09 01:18:39.007208 | orchestrator | Write report file ------------------------------------------------------- 1.22s 2026-04-09 01:18:39.007214 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s 2026-04-09 01:18:39.007220 | orchestrator | Create report output directory ------------------------------------------ 0.68s 2026-04-09 01:18:39.007226 | orchestrator | Print report file information ------------------------------------------- 0.62s 2026-04-09 01:18:39.007232 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.62s 2026-04-09 01:18:39.007238 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.48s 2026-04-09 01:18:39.007244 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.48s 2026-04-09 01:18:39.007250 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.47s 2026-04-09 01:18:39.007256 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s 2026-04-09 01:18:39.007262 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.46s 2026-04-09 01:18:39.007268 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s 2026-04-09 01:18:39.007275 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s 2026-04-09 01:18:39.007281 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.45s 2026-04-09 01:18:39.007287 | orchestrator | Calculate OSD devices for each 
host ------------------------------------- 0.44s 2026-04-09 01:18:39.007293 | orchestrator | Prepare test data ------------------------------------------------------- 0.41s 2026-04-09 01:18:39.007300 | orchestrator | Print report file information ------------------------------------------- 0.38s 2026-04-09 01:18:39.007307 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s 2026-04-09 01:18:39.182348 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-09 01:18:39.190999 | orchestrator | + set -e 2026-04-09 01:18:39.191176 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 01:18:39.191297 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 01:18:39.191312 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 01:18:39.191317 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 01:18:39.191322 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 01:18:39.191327 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 01:18:39.191334 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 01:18:39.191340 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:18:39.191345 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:18:39.191350 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-04-09 01:18:39.191356 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-04-09 01:18:39.191361 | orchestrator | ++ export ARA=false 2026-04-09 01:18:39.191367 | orchestrator | ++ ARA=false 2026-04-09 01:18:39.191372 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 01:18:39.191377 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 01:18:39.191382 | orchestrator | ++ export TEMPEST=true 2026-04-09 01:18:39.191388 | orchestrator | ++ TEMPEST=true 2026-04-09 01:18:39.191393 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 01:18:39.191398 | orchestrator | ++ IS_ZUUL=true 2026-04-09 01:18:39.191403 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 
2026-04-09 01:18:39.191408 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2026-04-09 01:18:39.191413 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 01:18:39.191418 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 01:18:39.191423 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 01:18:39.191427 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 01:18:39.191432 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 01:18:39.191437 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 01:18:39.191443 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 01:18:39.191448 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 01:18:39.191453 | orchestrator | + source /etc/os-release 2026-04-09 01:18:39.191458 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-09 01:18:39.191463 | orchestrator | ++ NAME=Ubuntu 2026-04-09 01:18:39.191468 | orchestrator | ++ VERSION_ID=24.04 2026-04-09 01:18:39.191473 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-09 01:18:39.191478 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-09 01:18:39.191483 | orchestrator | ++ ID=ubuntu 2026-04-09 01:18:39.191488 | orchestrator | ++ ID_LIKE=debian 2026-04-09 01:18:39.191493 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-09 01:18:39.191498 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-09 01:18:39.191504 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-09 01:18:39.191520 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-09 01:18:39.191526 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-09 01:18:39.191531 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-09 01:18:39.191536 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-09 01:18:39.191542 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-09 01:18:39.191548 | orchestrator | + dpkg 
-s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-09 01:18:39.224810 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-09 01:19:02.042582 | orchestrator | 2026-04-09 01:19:02.042705 | orchestrator | # Status of Elasticsearch 2026-04-09 01:19:02.042720 | orchestrator | 2026-04-09 01:19:02.042728 | orchestrator | + pushd /opt/configuration/contrib 2026-04-09 01:19:02.042736 | orchestrator | + echo 2026-04-09 01:19:02.042742 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-09 01:19:02.042748 | orchestrator | + echo 2026-04-09 01:19:02.042754 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-09 01:19:02.205253 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-09 01:19:02.205344 | orchestrator | 2026-04-09 01:19:02.205357 | orchestrator | # Status of MariaDB 2026-04-09 01:19:02.205365 | orchestrator | 2026-04-09 01:19:02.205372 | orchestrator | + echo 2026-04-09 01:19:02.205380 | orchestrator | + echo '# Status of MariaDB' 2026-04-09 01:19:02.205386 | orchestrator | + echo 2026-04-09 01:19:02.205897 | orchestrator | ++ semver latest 10.0.0-0 2026-04-09 01:19:02.254485 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 01:19:02.254570 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 01:19:02.254578 | orchestrator | + osism status database 2026-04-09 01:19:03.793857 | orchestrator | 2026-04-09 01:19:03 | ERROR  | Unable to get ansible vault password 2026-04-09 01:19:03.793932 | orchestrator | 2026-04-09 01:19:03 | ERROR  | Unable to get vault secret: 
[Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:19:03.793943 | orchestrator | 2026-04-09 01:19:03 | ERROR  | Dropping encrypted entries 2026-04-09 01:19:03.829018 | orchestrator | 2026-04-09 01:19:03 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-04-09 01:19:03.840609 | orchestrator | 2026-04-09 01:19:03 | INFO  | Cluster Status: Primary 2026-04-09 01:19:03.840735 | orchestrator | 2026-04-09 01:19:03 | INFO  | Connected: ON 2026-04-09 01:19:03.840748 | orchestrator | 2026-04-09 01:19:03 | INFO  | Ready: ON 2026-04-09 01:19:03.840756 | orchestrator | 2026-04-09 01:19:03 | INFO  | Cluster Size: 3 2026-04-09 01:19:03.840762 | orchestrator | 2026-04-09 01:19:03 | INFO  | Local State: Synced 2026-04-09 01:19:03.840770 | orchestrator | 2026-04-09 01:19:03 | INFO  | Cluster State UUID: e44b6ae5-33ae-11f1-a0e2-82534f016b3d 2026-04-09 01:19:03.840778 | orchestrator | 2026-04-09 01:19:03 | INFO  | Cluster Members: 192.168.16.12:3306,192.168.16.10:3306,192.168.16.11:3306 2026-04-09 01:19:03.840787 | orchestrator | 2026-04-09 01:19:03 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-09 01:19:03.840792 | orchestrator | 2026-04-09 01:19:03 | INFO  | Local Node UUID: 16fcbde3-33af-11f1-9ed7-ab10c32ad066 2026-04-09 01:19:03.840797 | orchestrator | 2026-04-09 01:19:03 | INFO  | Flow Control Paused: 0.00% 2026-04-09 01:19:03.840801 | orchestrator | 2026-04-09 01:19:03 | INFO  | Recv Queue Avg: 0 2026-04-09 01:19:03.840815 | orchestrator | 2026-04-09 01:19:03 | INFO  | Send Queue Avg: 0.00105279 2026-04-09 01:19:03.841059 | orchestrator | 2026-04-09 01:19:03 | INFO  | Transactions: 4391 local commits, 6592 replicated, 72 received 2026-04-09 01:19:03.841099 | orchestrator | 2026-04-09 01:19:03 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-09 01:19:03.841200 | orchestrator | 2026-04-09 01:19:03 | INFO  | MariaDB Uptime: 21 minutes, 31 seconds 2026-04-09 01:19:03.841215 | orchestrator | 2026-04-09 01:19:03 | 
INFO  | Threads: 150 connected, 1 running 2026-04-09 01:19:03.841342 | orchestrator | 2026-04-09 01:19:03 | INFO  | Queries: 181606 total, 0 slow 2026-04-09 01:19:03.841352 | orchestrator | 2026-04-09 01:19:03 | INFO  | Aborted Connects: 151 2026-04-09 01:19:03.841356 | orchestrator | 2026-04-09 01:19:03 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-09 01:19:04.040036 | orchestrator | 2026-04-09 01:19:04.040107 | orchestrator | # Status of Prometheus 2026-04-09 01:19:04.040114 | orchestrator | 2026-04-09 01:19:04.040119 | orchestrator | + echo 2026-04-09 01:19:04.040123 | orchestrator | + echo '# Status of Prometheus' 2026-04-09 01:19:04.040128 | orchestrator | + echo 2026-04-09 01:19:04.040132 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-09 01:19:04.090976 | orchestrator | Unauthorized 2026-04-09 01:19:04.094091 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-09 01:19:04.151937 | orchestrator | Unauthorized 2026-04-09 01:19:04.154891 | orchestrator | 2026-04-09 01:19:04.154957 | orchestrator | # Status of RabbitMQ 2026-04-09 01:19:04.154963 | orchestrator | 2026-04-09 01:19:04.154968 | orchestrator | + echo 2026-04-09 01:19:04.154972 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-09 01:19:04.154977 | orchestrator | + echo 2026-04-09 01:19:04.155784 | orchestrator | ++ semver latest 10.0.0-0 2026-04-09 01:19:04.217790 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 01:19:04.217865 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 01:19:04.217877 | orchestrator | + osism status messaging 2026-04-09 01:19:11.197119 | orchestrator | 2026-04-09 01:19:11 | ERROR  | Unable to get ansible vault password 2026-04-09 01:19:11.197201 | orchestrator | 2026-04-09 01:19:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:19:11.197212 | orchestrator | 2026-04-09 01:19:11 | ERROR  | Dropping encrypted 
entries 2026-04-09 01:19:11.231274 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-04-09 01:19:11.287350 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] RabbitMQ Version: 4.1.8 2026-04-09 01:19:11.287434 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Erlang Version: 27.3.4.1 2026-04-09 01:19:11.287896 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-04-09 01:19:11.288152 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Cluster Size: 3 2026-04-09 01:19:11.288817 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:19:11.289133 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:19:11.290902 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-04-09 01:19:11.290936 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Connections: 210, Channels: 209, Queues: 173 2026-04-09 01:19:11.290941 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Messages: 230 total, 230 ready, 0 unacked 2026-04-09 01:19:11.290946 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Message Rates: 7.4/s publish, 7.8/s deliver 2026-04-09 01:19:11.290950 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Disk Free: 57.9 GB (limit: 0.0 GB) 2026-04-09 01:19:11.290954 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-04-09 01:19:11.290958 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] File Descriptors: 112/1024 2026-04-09 01:19:11.290962 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-0] Sockets: 0/0 2026-04-09 
01:19:11.291013 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-04-09 01:19:11.350825 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] RabbitMQ Version: 4.1.8 2026-04-09 01:19:11.350946 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Erlang Version: 27.3.4.1 2026-04-09 01:19:11.350960 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-09 01:19:11.350977 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-09 01:19:11.351145 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:19:11.351486 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:19:11.351758 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-09 01:19:11.351958 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Connections: 210, Channels: 209, Queues: 173 2026-04-09 01:19:11.352372 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Messages: 230 total, 230 ready, 0 unacked 2026-04-09 01:19:11.352549 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Message Rates: 7.4/s publish, 7.8/s deliver 2026-04-09 01:19:11.352972 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-09 01:19:11.353025 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-04-09 01:19:11.353204 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] File Descriptors: 111/1024 2026-04-09 01:19:11.353430 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-1] Sockets: 0/0 2026-04-09 01:19:11.353742 | 
orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-04-09 01:19:11.416673 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] RabbitMQ Version: 4.1.8 2026-04-09 01:19:11.416971 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Erlang Version: 27.3.4.1 2026-04-09 01:19:11.416980 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-09 01:19:11.416998 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-09 01:19:11.417007 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:19:11.417188 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:19:11.417464 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-09 01:19:11.418154 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Connections: 210, Channels: 209, Queues: 173 2026-04-09 01:19:11.418268 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Messages: 230 total, 230 ready, 0 unacked 2026-04-09 01:19:11.418287 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Message Rates: 7.4/s publish, 7.8/s deliver 2026-04-09 01:19:11.418571 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Disk Free: 58.6 GB (limit: 0.0 GB) 2026-04-09 01:19:11.418842 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-04-09 01:19:11.419007 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] File Descriptors: 107/1024 2026-04-09 01:19:11.419228 | orchestrator | 2026-04-09 01:19:11 | INFO  | [testbed-node-2] Sockets: 0/0 2026-04-09 01:19:11.419588 | orchestrator | 2026-04-09 
01:19:11 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-09 01:19:11.642382 | orchestrator | 2026-04-09 01:19:11.642464 | orchestrator | # Status of Redis 2026-04-09 01:19:11.642471 | orchestrator | 2026-04-09 01:19:11.642476 | orchestrator | + echo 2026-04-09 01:19:11.642480 | orchestrator | + echo '# Status of Redis' 2026-04-09 01:19:11.642486 | orchestrator | + echo 2026-04-09 01:19:11.642491 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-09 01:19:11.648624 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001364s;;;0.000000;10.000000 2026-04-09 01:19:11.648736 | orchestrator | 2026-04-09 01:19:11.648749 | orchestrator | # Create backup of MariaDB database 2026-04-09 01:19:11.648757 | orchestrator | 2026-04-09 01:19:11.648763 | orchestrator | + popd 2026-04-09 01:19:11.648770 | orchestrator | + echo 2026-04-09 01:19:11.648776 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-09 01:19:11.648781 | orchestrator | + echo 2026-04-09 01:19:11.648786 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-09 01:19:12.880824 | orchestrator | 2026-04-09 01:19:12 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-09 01:19:12.940635 | orchestrator | 2026-04-09 01:19:12 | INFO  | Task 8a6f56a7-7017-4b53-b9ce-37bcffec25c8 (mariadb_backup) was prepared for execution. 2026-04-09 01:19:12.940764 | orchestrator | 2026-04-09 01:19:12 | INFO  | It takes a moment until task 8a6f56a7-7017-4b53-b9ce-37bcffec25c8 (mariadb_backup) has been started and output is visible here. 
2026-04-09 01:21:09.553034 | orchestrator |
2026-04-09 01:21:09.553133 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:21:09.553146 | orchestrator |
2026-04-09 01:21:09.553155 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:21:09.553164 | orchestrator | Thursday 09 April 2026 01:19:16 +0000 (0:00:00.223) 0:00:00.223 ********
2026-04-09 01:21:09.553172 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:21:09.553182 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:21:09.553189 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:21:09.553198 | orchestrator |
2026-04-09 01:21:09.553207 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 01:21:09.553215 | orchestrator | Thursday 09 April 2026 01:19:16 +0000 (0:00:00.298) 0:00:00.522 ********
2026-04-09 01:21:09.553223 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-09 01:21:09.553233 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-09 01:21:09.553241 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-09 01:21:09.553250 | orchestrator |
2026-04-09 01:21:09.553258 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-09 01:21:09.553266 | orchestrator |
2026-04-09 01:21:09.553274 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-09 01:21:09.553284 | orchestrator | Thursday 09 April 2026 01:19:16 +0000 (0:00:00.384) 0:00:00.907 ********
2026-04-09 01:21:09.553293 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 01:21:09.553301 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 01:21:09.553310 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 01:21:09.553318 | orchestrator |
2026-04-09 01:21:09.553327 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-09 01:21:09.553335 | orchestrator | Thursday 09 April 2026 01:19:17 +0000 (0:00:00.382) 0:00:01.289 ********
2026-04-09 01:21:09.553345 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:21:09.553354 | orchestrator |
2026-04-09 01:21:09.553362 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-09 01:21:09.553390 | orchestrator | Thursday 09 April 2026 01:19:17 +0000 (0:00:00.623) 0:00:01.913 ********
2026-04-09 01:21:09.553399 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:21:09.553408 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:21:09.553416 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:21:09.553425 | orchestrator |
2026-04-09 01:21:09.553433 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-09 01:21:09.553441 | orchestrator | Thursday 09 April 2026 01:19:21 +0000 (0:00:03.576) 0:00:05.490 ********
2026-04-09 01:21:09.553450 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:21:09.553460 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:21:09.553468 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:21:09.553477 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-09 01:21:09.553486 | orchestrator |
2026-04-09 01:21:09.553494 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-09 01:21:09.553503 | orchestrator | skipping: no hosts matched
2026-04-09 01:21:09.553512 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-09 01:21:09.553520 | orchestrator |
2026-04-09 01:21:09.553528 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-09 01:21:09.553537 | orchestrator | skipping: no hosts matched
2026-04-09 01:21:09.553546 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-09 01:21:09.553577 | orchestrator | mariadb_bootstrap_restart
2026-04-09 01:21:09.553587 | orchestrator |
2026-04-09 01:21:09.553596 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-09 01:21:09.553651 | orchestrator | skipping: no hosts matched
2026-04-09 01:21:09.553660 | orchestrator |
2026-04-09 01:21:09.553670 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-09 01:21:09.553679 | orchestrator |
2026-04-09 01:21:09.553687 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-09 01:21:09.553695 | orchestrator | Thursday 09 April 2026 01:21:08 +0000 (0:01:47.525) 0:01:53.015 ********
2026-04-09 01:21:09.553703 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:21:09.553711 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:21:09.553720 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:21:09.553728 | orchestrator |
2026-04-09 01:21:09.553736 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-09 01:21:09.553746 | orchestrator | Thursday 09 April 2026 01:21:09 +0000 (0:00:00.276) 0:01:53.292 ********
2026-04-09 01:21:09.553774 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:21:09.553783 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:21:09.553792 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:21:09.553800 | orchestrator |
2026-04-09 01:21:09.553809 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:21:09.553819 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 01:21:09.553830 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 01:21:09.553840 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 01:21:09.553849 | orchestrator |
2026-04-09 01:21:09.553858 | orchestrator |
2026-04-09 01:21:09.553867 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:21:09.553875 | orchestrator | Thursday 09 April 2026 01:21:09 +0000 (0:00:00.198) 0:01:53.490 ********
2026-04-09 01:21:09.553883 | orchestrator | ===============================================================================
2026-04-09 01:21:09.553893 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 107.53s
2026-04-09 01:21:09.553939 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.58s
2026-04-09 01:21:09.553949 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.62s
2026-04-09 01:21:09.553958 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2026-04-09 01:21:09.553967 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s
2026-04-09 01:21:09.553976 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-04-09 01:21:09.553984 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s
2026-04-09 01:21:09.553991 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.20s
2026-04-09 01:21:09.714080 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-09 01:21:09.722263 | orchestrator | + set -e
2026-04-09 01:21:09.722353 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 01:21:09.722363 |
orchestrator | ++ export INTERACTIVE=false
2026-04-09 01:21:09.722391 | orchestrator | ++ INTERACTIVE=false
2026-04-09 01:21:09.722398 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 01:21:09.722404 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 01:21:09.722410 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-09 01:21:09.723260 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-09 01:21:09.728841 | orchestrator |
2026-04-09 01:21:09.728948 | orchestrator | # OpenStack endpoints
2026-04-09 01:21:09.728963 | orchestrator |
2026-04-09 01:21:09.729000 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-09 01:21:09.729010 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-09 01:21:09.729019 | orchestrator | + export OS_CLOUD=admin
2026-04-09 01:21:09.729028 | orchestrator | + OS_CLOUD=admin
2026-04-09 01:21:09.729035 | orchestrator | + echo
2026-04-09 01:21:09.729041 | orchestrator | + echo '# OpenStack endpoints'
2026-04-09 01:21:09.729046 | orchestrator | + echo
2026-04-09 01:21:09.729053 | orchestrator | + openstack endpoint list
2026-04-09 01:21:12.663483 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-09 01:21:12.663574 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-09 01:21:12.663584 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-09 01:21:12.663591 | orchestrator | | 094fc5b273774e9488f3409fd734e823 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-09 01:21:12.663598 | orchestrator | | 19dbc7114cb3470fa3b3fbf7d8934c2f | RegionOne | cinder | block-storage | True | public | https://api.testbed.osism.xyz:8776/v3 |
2026-04-09 01:21:12.663604 | orchestrator | | 2c6c2335c53543e2ab1376a085146130 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-09 01:21:12.663611 | orchestrator | | 37d0beba87064a1ca5ef8d75ab396e3e | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-09 01:21:12.663617 | orchestrator | | 38224b3823b64786bc3c30f426414aac | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-09 01:21:12.663628 | orchestrator | | 4c6ef47be5284a28a0b1300c45546da4 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-09 01:21:12.663635 | orchestrator | | 517cf0159dc047c89aa37f0a4a403071 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-09 01:21:12.663640 | orchestrator | | 594108d8a095478f9ff5c90520c6fc99 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-09 01:21:12.663647 | orchestrator | | 63a44d09b95241f4b2668379e0200e45 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-09 01:21:12.663653 | orchestrator | | 6c5a668fcfe1423e87734ff6c257ed35 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-09 01:21:12.663660 | orchestrator | | 7e832ea8ea9449edaec52e1b18169af8 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-09 01:21:12.663666 | orchestrator | | 7fe677e1009a49b1b6509819e04886da | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-09 01:21:12.663673 | orchestrator | | 8223d4b1b1f44b63b948ce3771788d91 | RegionOne | cinder | block-storage | True | internal | https://api-int.testbed.osism.xyz:8776/v3 |
2026-04-09 01:21:12.663680 | orchestrator | | 85605c8b913b4fe4903a45ebc4109318 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-09 01:21:12.663686 | orchestrator | | 870cb114913748ceadca8a0cce29630e | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-09 01:21:12.663694 | orchestrator | | 940ef2265fcb44558c8fbceecdb89424 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-09 01:21:12.663727 | orchestrator | | 94981aa071914ce5ab2c00458cfd3a55 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-09 01:21:12.663734 | orchestrator | | 99ca48a7d4a84549897a083380daa9af | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-09 01:21:12.663754 | orchestrator | | 9db67a416b354722bcfe24e02809a4e7 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-09 01:21:12.663761 | orchestrator | | c0b869de40174c9d85d471fdc208fd8b | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-09 01:21:12.663782 | orchestrator | | c55cb5a40dc54bc1bc50a730210b043f | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-09 01:21:12.663789 | orchestrator | | eb9bbb4f33044a0e8dc7de2388aad98d | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-09 01:21:12.663796 | orchestrator | | fdf48dc157f546f29da42306e56b20b8 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-09 01:21:12.663802 | orchestrator | | fe710b22b06b4ed3975ec16ec90d3a9c | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-09 01:21:12.663809 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-09 01:21:12.868874 | orchestrator |
2026-04-09 01:21:12.868999 | orchestrator | # Cinder
2026-04-09 01:21:12.869010 | orchestrator |
2026-04-09 01:21:12.869016 | orchestrator | + echo
2026-04-09 01:21:12.869021 | orchestrator | + echo '# Cinder'
2026-04-09 01:21:12.869025 | orchestrator | + echo
2026-04-09 01:21:12.869029 | orchestrator | + openstack volume service list
2026-04-09 01:21:16.439868 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-09 01:21:16.439970 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-09 01:21:16.439977 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-09 01:21:16.439982 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-09T01:21:12.000000 |
2026-04-09 01:21:16.439987 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-09T01:21:11.000000 |
2026-04-09 01:21:16.439991 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-09T01:21:12.000000 |
2026-04-09 01:21:16.439995 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-09T01:21:12.000000 |
2026-04-09 01:21:16.439999 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-09T01:21:10.000000 |
2026-04-09 01:21:16.440003 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-09T01:21:11.000000 |
2026-04-09 01:21:16.440006 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-09T01:21:15.000000 |
2026-04-09 01:21:16.440010 | orchestrator | | cinder-backup
| testbed-node-1 | nova | enabled | up | 2026-04-09T01:21:07.000000 |
2026-04-09 01:21:16.440014 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-09T01:21:07.000000 |
2026-04-09 01:21:16.440018 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-09 01:21:16.657668 | orchestrator |
2026-04-09 01:21:16.657757 | orchestrator | # Neutron
2026-04-09 01:21:16.657764 | orchestrator |
2026-04-09 01:21:16.657769 | orchestrator | + echo
2026-04-09 01:21:16.657774 | orchestrator | + echo '# Neutron'
2026-04-09 01:21:16.657780 | orchestrator | + echo
2026-04-09 01:21:16.657785 | orchestrator | + openstack network agent list
2026-04-09 01:21:19.372041 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-09 01:21:19.372134 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-09 01:21:19.372142 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-09 01:21:19.372147 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-09 01:21:19.372151 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-09 01:21:19.372156 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-09 01:21:19.372159 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-09 01:21:19.372163 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-09 01:21:19.372181 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-09 01:21:19.372185 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-09 01:21:19.372189 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-09 01:21:19.372193 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-09 01:21:19.372197 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-09 01:21:19.605112 | orchestrator | + openstack network service provider list
2026-04-09 01:21:21.997643 | orchestrator | +---------------+------+---------+
2026-04-09 01:21:21.997763 | orchestrator | | Service Type | Name | Default |
2026-04-09 01:21:21.997776 | orchestrator | +---------------+------+---------+
2026-04-09 01:21:21.997782 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-09 01:21:21.997788 | orchestrator | +---------------+------+---------+
2026-04-09 01:21:22.227184 | orchestrator |
2026-04-09 01:21:22.227268 | orchestrator | # Nova
2026-04-09 01:21:22.227278 | orchestrator |
2026-04-09 01:21:22.227285 | orchestrator | + echo
2026-04-09 01:21:22.227292 | orchestrator | + echo '# Nova'
2026-04-09 01:21:22.227299 | orchestrator | + echo
2026-04-09 01:21:22.227306 | orchestrator | + openstack compute service list
2026-04-09 01:21:25.500712 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-09 01:21:25.500779 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-09 01:21:25.500785 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-09 01:21:25.500790 | orchestrator | | 13760a1c-4f87-418e-a496-944a21198ec8 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-09T01:21:18.000000 |
2026-04-09 01:21:25.500794 | orchestrator | | dd2448cd-686f-4903-ade7-ea3c2a608b54 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-09T01:21:18.000000 |
2026-04-09 01:21:25.500822 | orchestrator | | f5bd0f55-74e9-4b5f-8782-f67acc537d25 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-09T01:21:24.000000 |
2026-04-09 01:21:25.500830 | orchestrator | | e4379616-c17e-4769-8caf-afeef8475e5d | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-09T01:21:22.000000 |
2026-04-09 01:21:25.500837 | orchestrator | | dbe11c65-a58d-4e3c-a4d8-f1dd5f1a1389 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-09T01:21:22.000000 |
2026-04-09 01:21:25.500842 | orchestrator | | 99da59e9-1802-4707-9391-4f5a80a58a3c | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-09T01:21:23.000000 |
2026-04-09 01:21:25.500848 | orchestrator | | 157dfde1-b3b1-429d-bd20-468f619d6395 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-09T01:21:18.000000 |
2026-04-09 01:21:25.500854 | orchestrator | | 7d619ad8-9606-4327-aed5-23d961c68ed4 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-09T01:21:19.000000 |
2026-04-09 01:21:25.500860 | orchestrator | | 934a1676-df9a-4fe3-a1fc-03a53c0b7d70 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-09T01:21:19.000000 |
2026-04-09 01:21:25.500866 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-09 01:21:25.720656 | orchestrator | + openstack hypervisor list
2026-04-09 01:21:28.171771 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-09 01:21:28.171869 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-09 01:21:28.171880 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-09 01:21:28.171889 | orchestrator | | 2bf6affc-6f4d-4f72-b0e7-c310920d2f94 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-09 01:21:28.171897 | orchestrator | | f978236f-598d-495d-88f5-38e8fbd52c0b | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-09 01:21:28.171905 | orchestrator | | 884c2ed6-283f-4cde-a1eb-5db113e594d5 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-09 01:21:28.171913 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-09 01:21:28.408433 | orchestrator |
2026-04-09 01:21:28.408527 | orchestrator | # Run OpenStack test play
2026-04-09 01:21:28.408535 | orchestrator |
2026-04-09 01:21:28.408539 | orchestrator | + echo
2026-04-09 01:21:28.408544 | orchestrator | + echo '# Run OpenStack test play'
2026-04-09 01:21:28.408549 | orchestrator | + echo
2026-04-09 01:21:28.408553 | orchestrator | + osism apply --environment openstack test
2026-04-09 01:21:29.725532 | orchestrator | 2026-04-09 01:21:29 | INFO  | Trying to run play test in environment openstack
2026-04-09 01:21:29.755124 | orchestrator | 2026-04-09 01:21:29 | INFO  | Prepare task for execution of test.
2026-04-09 01:21:29.834367 | orchestrator | 2026-04-09 01:21:29 | INFO  | Task cce55834-7767-4afe-95da-24ecb284b7c1 (test) was prepared for execution.
2026-04-09 01:21:29.834458 | orchestrator | 2026-04-09 01:21:29 | INFO  | It takes a moment until task cce55834-7767-4afe-95da-24ecb284b7c1 (test) has been started and output is visible here.
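The Cinder, Neutron, and Nova listings above are eyeball checks: every row should show `up`. A minimal sketch of how that could be turned into an automated pass/fail step, assuming the OpenStack CLI's `-f value` output format (the `services_all_up` helper is hypothetical, not part of the testbed scripts):

```shell
#!/bin/sh
# Hypothetical helper: read "binary host state" triples on stdin, as
# produced by e.g.
#   openstack compute service list -f value -c Binary -c Host -c State
# and exit non-zero if any service is not "up", naming the offenders.
services_all_up() {
    awk '
        $3 != "up" { bad++; print "DOWN: " $1 " on " $2 }
        END { exit (bad > 0) }
    '
}

# Usage against canned data; a live check would pipe the CLI output instead:
printf '%s\n' \
    "nova-scheduler testbed-node-0 up" \
    "nova-compute testbed-node-3 up" | services_all_up && echo "all services up"
```

The same filter works for `openstack volume service list`, since both commands expose Binary, Host, and State columns.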
2026-04-09 01:24:39.558975 | orchestrator |
2026-04-09 01:24:39.559075 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-09 01:24:39.559087 | orchestrator |
2026-04-09 01:24:39.559094 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-09 01:24:39.559102 | orchestrator | Thursday 09 April 2026 01:21:32 +0000 (0:00:00.100) 0:00:00.100 ********
2026-04-09 01:24:39.559109 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559116 | orchestrator |
2026-04-09 01:24:39.559122 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-09 01:24:39.559129 | orchestrator | Thursday 09 April 2026 01:21:36 +0000 (0:00:03.662) 0:00:03.763 ********
2026-04-09 01:24:39.559135 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559142 | orchestrator |
2026-04-09 01:24:39.559149 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-09 01:24:39.559175 | orchestrator | Thursday 09 April 2026 01:21:40 +0000 (0:00:04.268) 0:00:08.032 ********
2026-04-09 01:24:39.559181 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559235 | orchestrator |
2026-04-09 01:24:39.559243 | orchestrator | TASK [Create test project] *****************************************************
2026-04-09 01:24:39.559251 | orchestrator | Thursday 09 April 2026 01:21:47 +0000 (0:00:06.396) 0:00:14.428 ********
2026-04-09 01:24:39.559257 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559263 | orchestrator |
2026-04-09 01:24:39.559270 | orchestrator | TASK [Create test user] ********************************************************
2026-04-09 01:24:39.559276 | orchestrator | Thursday 09 April 2026 01:21:51 +0000 (0:00:04.227) 0:00:18.655 ********
2026-04-09 01:24:39.559283 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559290 | orchestrator |
2026-04-09 01:24:39.559297 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-09 01:24:39.559304 | orchestrator | Thursday 09 April 2026 01:21:55 +0000 (0:00:04.076) 0:00:22.732 ********
2026-04-09 01:24:39.559310 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-09 01:24:39.559317 | orchestrator | changed: [localhost] => (item=member)
2026-04-09 01:24:39.559325 | orchestrator | changed: [localhost] => (item=creator)
2026-04-09 01:24:39.559332 | orchestrator |
2026-04-09 01:24:39.559338 | orchestrator | TASK [Create test server group] ************************************************
2026-04-09 01:24:39.559344 | orchestrator | Thursday 09 April 2026 01:22:07 +0000 (0:00:11.699) 0:00:34.431 ********
2026-04-09 01:24:39.559351 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559357 | orchestrator |
2026-04-09 01:24:39.559364 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-09 01:24:39.559371 | orchestrator | Thursday 09 April 2026 01:22:11 +0000 (0:00:04.490) 0:00:38.921 ********
2026-04-09 01:24:39.559377 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559384 | orchestrator |
2026-04-09 01:24:39.559391 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-09 01:24:39.559397 | orchestrator | Thursday 09 April 2026 01:22:16 +0000 (0:00:04.536) 0:00:43.458 ********
2026-04-09 01:24:39.559403 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559409 | orchestrator |
2026-04-09 01:24:39.559415 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-09 01:24:39.559421 | orchestrator | Thursday 09 April 2026 01:22:20 +0000 (0:00:04.317) 0:00:47.776 ********
2026-04-09 01:24:39.559428 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559434 | orchestrator |
2026-04-09 01:24:39.559440 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-09 01:24:39.559446 | orchestrator | Thursday 09 April 2026 01:22:24 +0000 (0:00:04.033) 0:00:51.809 ********
2026-04-09 01:24:39.559452 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559458 | orchestrator |
2026-04-09 01:24:39.559465 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-09 01:24:39.559471 | orchestrator | Thursday 09 April 2026 01:22:28 +0000 (0:00:03.977) 0:00:55.787 ********
2026-04-09 01:24:39.559477 | orchestrator | changed: [localhost]
2026-04-09 01:24:39.559483 | orchestrator |
2026-04-09 01:24:39.559489 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-09 01:24:39.559515 | orchestrator | Thursday 09 April 2026 01:22:32 +0000 (0:00:03.681) 0:00:59.469 ********
2026-04-09 01:24:39.559521 | orchestrator | changed: [localhost] => (item={'name': 'test-1'})
2026-04-09 01:24:39.559528 | orchestrator | changed: [localhost] => (item={'name': 'test-2'})
2026-04-09 01:24:39.559535 | orchestrator | changed: [localhost] => (item={'name': 'test-3'})
2026-04-09 01:24:39.559542 | orchestrator |
2026-04-09 01:24:39.559549 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-09 01:24:39.559556 | orchestrator | Thursday 09 April 2026 01:22:45 +0000 (0:00:13.109) 0:01:12.579 ********
2026-04-09 01:24:39.559564 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-09 01:24:39.559580 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-09 01:24:39.559586 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-09 01:24:39.559593 | orchestrator |
2026-04-09 01:24:39.559599 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-09 01:24:39.559606 | orchestrator | Thursday 09 April 2026 01:23:01 +0000 (0:00:16.056) 0:01:28.636 ********
2026-04-09 01:24:39.559612 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-09 01:24:39.559619 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-09 01:24:39.559625 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-09 01:24:39.559632 | orchestrator |
2026-04-09 01:24:39.559639 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-09 01:24:39.559645 | orchestrator |
2026-04-09 01:24:39.559652 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-09 01:24:39.559678 | orchestrator | Thursday 09 April 2026 01:23:32 +0000 (0:00:30.918) 0:01:59.554 ********
2026-04-09 01:24:39.559691 | orchestrator | ok: [localhost]
2026-04-09 01:24:39.559699 | orchestrator |
2026-04-09 01:24:39.559706 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-09 01:24:39.559712 | orchestrator | Thursday 09 April 2026 01:23:36 +0000 (0:00:03.757) 0:02:03.312 ********
2026-04-09 01:24:39.559719 | orchestrator | skipping: [localhost]
2026-04-09 01:24:39.559725 | orchestrator |
2026-04-09 01:24:39.559732 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-09 01:24:39.559738 | orchestrator | Thursday 09 April 2026 01:23:36 +0000 (0:00:00.059) 0:02:03.372 ********
2026-04-09 01:24:39.559745 | orchestrator | skipping: [localhost]
2026-04-09 01:24:39.559752 | orchestrator |
2026-04-09 01:24:39.559758 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-09 01:24:39.559765 | orchestrator | Thursday 09 April 2026 01:23:36 +0000 (0:00:00.055) 0:02:03.428 ********
2026-04-09 01:24:39.559771 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-09 01:24:39.559778 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-09 01:24:39.559785 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-09 01:24:39.559791 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-09 01:24:39.559798 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-09 01:24:39.559805 | orchestrator | skipping: [localhost]
2026-04-09 01:24:39.559811 | orchestrator |
2026-04-09 01:24:39.559818 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-09 01:24:39.559825 | orchestrator | Thursday 09 April 2026 01:23:36 +0000 (0:00:00.159) 0:02:03.588 ********
2026-04-09 01:24:39.559831 | orchestrator | skipping: [localhost]
2026-04-09 01:24:39.559837 | orchestrator |
2026-04-09 01:24:39.559844 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-09 01:24:39.559850 | orchestrator | Thursday 09 April 2026 01:23:36 +0000 (0:00:00.147) 0:02:03.735 ********
2026-04-09 01:24:39.559856 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-09 01:24:39.559862 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-09 01:24:39.559869 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-09 01:24:39.559875 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-09 01:24:39.559882 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-09 01:24:39.559888 | orchestrator |
2026-04-09 01:24:39.559895 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-09 01:24:39.559908 | orchestrator | Thursday 09 April 2026 01:23:41 +0000 (0:00:04.442) 0:02:08.177 ********
2026-04-09 01:24:39.559915 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-09 01:24:39.559924 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-09 01:24:39.559931 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-09 01:24:39.559938 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-09 01:24:39.559947 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j735486377135.2807', 'results_file': '/ansible/.ansible_async/j735486377135.2807', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-09 01:24:39.559957 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j993231175064.2832', 'results_file': '/ansible/.ansible_async/j993231175064.2832', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-09 01:24:39.559965 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j309438632321.2857', 'results_file': '/ansible/.ansible_async/j309438632321.2857', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-09 01:24:39.559972 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j426027958707.2882', 'results_file': '/ansible/.ansible_async/j426027958707.2882', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-09 01:24:39.559980 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-09 01:24:39.559987 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j727949454923.2907', 'results_file': '/ansible/.ansible_async/j727949454923.2907', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-09 01:24:39.559994 | orchestrator |
2026-04-09 01:24:39.560001 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-09 01:24:39.560008 | orchestrator | Thursday 09 April 2026 01:24:38 +0000 (0:00:57.538) 0:03:05.716 ********
2026-04-09 01:24:39.560024 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-09 01:25:52.088018 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-09 01:25:52.088073 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-09 01:25:52.088078 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-09 01:25:52.088082 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-09 01:25:52.088085 | orchestrator |
2026-04-09 01:25:52.088089 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-09 01:25:52.088092 | orchestrator | Thursday 09 April 2026 01:24:43 +0000 (0:00:04.502) 0:03:10.218 ********
2026-04-09 01:25:52.088096 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-09 01:25:52.088100 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j747918327592.3017', 'results_file': '/ansible/.ansible_async/j747918327592.3017', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088105 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j789722907486.3042', 'results_file': '/ansible/.ansible_async/j789722907486.3042', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088119 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j226054017018.3067', 'results_file': '/ansible/.ansible_async/j226054017018.3067', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088122 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j791748320650.3092', 'results_file': '/ansible/.ansible_async/j791748320650.3092', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088125 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j256472687730.3117', 'results_file': '/ansible/.ansible_async/j256472687730.3117', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088128 | orchestrator |
2026-04-09 01:25:52.088132 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-09 01:25:52.088135 | orchestrator | Thursday 09 April 2026 01:24:52 +0000 (0:00:09.436) 0:03:19.655 ********
2026-04-09 01:25:52.088138 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-09 01:25:52.088141 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-09 01:25:52.088144 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-09 01:25:52.088147 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-09 01:25:52.088150 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-09 01:25:52.088153 | orchestrator |
2026-04-09 01:25:52.088157 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-09 01:25:52.088160 | orchestrator | Thursday 09 April 2026 01:24:57 +0000 (0:00:04.655) 0:03:24.310 ********
2026-04-09 01:25:52.088163 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-09 01:25:52.088166 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j881801215537.3186', 'results_file': '/ansible/.ansible_async/j881801215537.3186', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088169 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j21029708498.3211', 'results_file': '/ansible/.ansible_async/j21029708498.3211', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088172 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j686530622769.3237', 'results_file': '/ansible/.ansible_async/j686530622769.3237', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088176 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j769673088228.3263', 'results_file': '/ansible/.ansible_async/j769673088228.3263', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088191 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j849131541739.3289', 'results_file': '/ansible/.ansible_async/j849131541739.3289', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-09 01:25:52.088195 | orchestrator |
2026-04-09 01:25:52.088198 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-09 01:25:52.088201 | orchestrator | Thursday 09 April 2026 01:25:07 +0000 (0:00:10.072) 0:03:34.383 ********
2026-04-09 01:25:52.088204 | orchestrator | changed: [localhost]
2026-04-09 01:25:52.088208 | orchestrator |
2026-04-09 01:25:52.088211 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-09 01:25:52.088217 | orchestrator | Thursday 09 April 2026 01:25:13 +0000 (0:00:06.629) 0:03:41.012 ********
2026-04-09 01:25:52.088220 | orchestrator | changed: [localhost]
2026-04-09 01:25:52.088223 | orchestrator |
2026-04-09 01:25:52.088226 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-09 01:25:52.088229 | orchestrator | Thursday 09 April 2026 01:25:27 +0000 (0:00:13.703) 0:03:54.715 ********
2026-04-09 01:25:52.088232 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-09 01:25:52.088235 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-09 01:25:52.088304 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-09 01:25:52.088311 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-09 01:25:52.088316 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-09 01:25:52.088320 | orchestrator |
2026-04-09 01:25:52.088326 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-09 01:25:52.088331 | orchestrator | Thursday 09 April 2026 01:25:51 +0000 (0:00:24.214) 0:04:18.930 ********
2026-04-09 01:25:52.088335 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-09 01:25:52.088340 | orchestrator |     "msg": "test: 192.168.112.185"
2026-04-09 01:25:52.088346 | orchestrator | }
2026-04-09 01:25:52.088351 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-09 01:25:52.088357 | orchestrator |     "msg": "test-1: 192.168.112.143"
2026-04-09 01:25:52.088361 | orchestrator | }
2026-04-09 01:25:52.088366 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-09 01:25:52.088371 | orchestrator |     "msg": "test-2: 192.168.112.103"
2026-04-09 01:25:52.088376 | orchestrator | }
2026-04-09 01:25:52.088380 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-09 01:25:52.088385 | orchestrator |     "msg": "test-3: 192.168.112.109"
2026-04-09 01:25:52.088391 | orchestrator | }
2026-04-09 01:25:52.088396 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-09 01:25:52.088401 | orchestrator |     "msg": "test-4: 192.168.112.175"
2026-04-09 01:25:52.088406 | orchestrator | }
2026-04-09 01:25:52.088412 | orchestrator |
2026-04-09 01:25:52.088417 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:25:52.088422 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 01:25:52.088428 | orchestrator |
2026-04-09 01:25:52.088434 | orchestrator |
2026-04-09 01:25:52.088439 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:25:52.088444 | orchestrator | Thursday 09 April 2026 01:25:51 +0000 (0:00:00.116) 0:04:19.047 ********
2026-04-09 01:25:52.088449 | orchestrator | ===============================================================================
2026-04-09 01:25:52.088452 | orchestrator | Wait for instance creation to complete --------------------------------- 57.54s
2026-04-09 01:25:52.088455 | orchestrator | Create test routers ---------------------------------------------------- 30.92s
2026-04-09 01:25:52.088458 | orchestrator | Create floating ip addresses ------------------------------------------- 24.21s
2026-04-09 01:25:52.088461 | orchestrator | Create test subnets ---------------------------------------------------- 16.06s
2026-04-09 01:25:52.088464 | orchestrator | Attach test volume ----------------------------------------------------- 13.70s
2026-04-09 01:25:52.088467 | orchestrator | Create test networks --------------------------------------------------- 13.11s
2026-04-09 01:25:52.088470 | orchestrator | Add member roles to user test ------------------------------------------ 11.70s
2026-04-09 01:25:52.088473 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.07s
2026-04-09 01:25:52.088477 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.44s
2026-04-09 01:25:52.088480 | orchestrator | Create test volume ------------------------------------------------------ 6.63s
2026-04-09 01:25:52.088483 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.40s
2026-04-09 01:25:52.088490 | orchestrator | Add tag to instances ---------------------------------------------------- 4.66s
2026-04-09 01:25:52.088493 | orchestrator | Create ssh security group ----------------------------------------------- 4.54s
2026-04-09 01:25:52.088496 | orchestrator | Add metadata to instances ----------------------------------------------- 4.50s
2026-04-09 01:25:52.088499 | orchestrator | Create test server group ------------------------------------------------ 4.49s
2026-04-09 01:25:52.088502 | orchestrator | Create test instances --------------------------------------------------- 4.44s
2026-04-09 01:25:52.088505 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.32s
2026-04-09 01:25:52.088508 | orchestrator | Create test-admin user -------------------------------------------------- 4.27s
2026-04-09 01:25:52.088511 | orchestrator | Create test project ----------------------------------------------------- 4.23s
2026-04-09 01:25:52.088514 | orchestrator | Create test user -------------------------------------------------------- 4.08s
2026-04-09 01:25:52.290078 | orchestrator | + server_list
2026-04-09 01:25:52.290127 | orchestrator | + openstack --os-cloud test server list
2026-04-09 01:25:55.831517 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-09 01:25:55.831622 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-09 01:25:55.831632 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-09 01:25:55.831637 | orchestrator | | f3c8cd64-430f-4006-9746-b1f85c50090f | test-4 | ACTIVE | test-3=192.168.112.175, 192.168.202.101 | N/A (booted from volume) | SCS-1L-1 |
2026-04-09 01:25:55.831641 | orchestrator | | 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 | test-3 | ACTIVE | test-2=192.168.112.109, 192.168.201.55 | N/A (booted from volume) | SCS-1L-1 |
2026-04-09 01:25:55.831645 | orchestrator | | da955dc3-e2dc-40d0-8b0d-45d177d70f7f | test-2 | ACTIVE | test-2=192.168.112.103, 192.168.201.218 | N/A (booted from volume) | SCS-1L-1 |
2026-04-09 01:25:55.831649 | orchestrator | | 8be5dc37-febe-4e41-8648-e76027e991a7 | test-1 | ACTIVE | test-1=192.168.112.143, 192.168.200.167 | N/A (booted from volume) | SCS-1L-1 |
2026-04-09 01:25:55.831653 | orchestrator | | b8ca9ad9-9664-4929-bd79-74c25be32b8c | test | ACTIVE | test-1=192.168.112.185, 192.168.200.107 | N/A (booted from volume) | SCS-1L-1 |
2026-04-09 01:25:55.831657 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-09 01:25:56.071710 | orchestrator | + openstack --os-cloud test server show test
2026-04-09 01:25:59.492564 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:25:59.492632 | orchestrator | | Field | Value |
2026-04-09 01:25:59.492639 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:25:59.492644 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-09 01:25:59.492660 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-09 01:25:59.492664 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-09 01:25:59.492668 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-09 01:25:59.492672 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-09 01:25:59.492677 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-09 01:25:59.492690 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-09 01:25:59.492694 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-09 01:25:59.492698 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-09 01:25:59.492702 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-09 01:25:59.492713 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-09 01:25:59.492718 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-09 01:25:59.492721 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-09 01:25:59.492725 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-09 01:25:59.492731 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-09 01:25:59.492735 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:24:11.000000 |
2026-04-09 01:25:59.492742 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-09 01:25:59.492746 | orchestrator | | accessIPv4 | |
2026-04-09 01:25:59.492750 | orchestrator | | accessIPv6 | |
2026-04-09 01:25:59.492759 | orchestrator | | addresses | test-1=192.168.112.185, 192.168.200.107 |
2026-04-09 01:25:59.492763 | orchestrator | | config_drive | |
2026-04-09 01:25:59.492767 | orchestrator | | created | 2026-04-09T01:23:45Z |
2026-04-09 01:25:59.492771 | orchestrator | | description | None |
2026-04-09 01:25:59.492775 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-09 01:25:59.492802 | orchestrator | | hostId | d8754f923292b156dac01a9acba6902831254a6e4370b945605360b5 |
2026-04-09 01:25:59.492807 | orchestrator | | host_status | None |
2026-04-09 01:25:59.492816 | orchestrator | | id | b8ca9ad9-9664-4929-bd79-74c25be32b8c |
2026-04-09 01:25:59.492820 | orchestrator | | image | N/A (booted from volume) |
2026-04-09 01:25:59.492827 | orchestrator | | key_name | test |
2026-04-09 01:25:59.492831 | orchestrator | | locked | False |
2026-04-09 01:25:59.492835 | orchestrator | | locked_reason | None |
2026-04-09 01:25:59.492838 | orchestrator | | name | test |
2026-04-09 01:25:59.492842 | orchestrator | | pinned_availability_zone | None |
2026-04-09 01:25:59.492848 | orchestrator | | progress | 0 |
2026-04-09 01:25:59.492852 | orchestrator | | project_id | d58b1e6d443f4b39846dc4b10d757807 |
2026-04-09 01:25:59.492856 | orchestrator | | properties | hostname='test' |
2026-04-09 01:25:59.492864 | orchestrator | | security_groups | name='ssh' |
2026-04-09 01:25:59.492868 | orchestrator | | | name='icmp' |
2026-04-09 01:25:59.492881 | orchestrator | | server_groups | None |
2026-04-09 01:25:59.492885 | orchestrator | | status | ACTIVE |
2026-04-09 01:25:59.492889 | orchestrator | | tags | test |
2026-04-09 01:25:59.492893 | orchestrator | | trusted_image_certificates | None |
2026-04-09 01:25:59.492897 | orchestrator | | updated | 2026-04-09T01:24:44Z |
2026-04-09 01:25:59.492903 | orchestrator | | user_id | c811549098b8438ba77bd4a5b2c4cf11 |
2026-04-09 01:25:59.492907 | orchestrator | | volumes_attached | delete_on_termination='True', id='4024657f-07f4-4c22-bd79-de8a003374cd' |
2026-04-09 01:25:59.492911 | orchestrator | | | delete_on_termination='False', id='a33eb1bc-a441-462c-b74a-964c818ce7d3' |
2026-04-09 01:25:59.497044 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:25:59.758483 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-09 01:26:02.777562 | orchestrator |
+-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:26:02.777649 | orchestrator | | Field | Value |
2026-04-09 01:26:02.777657 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:26:02.777663 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-09 01:26:02.777667 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-09 01:26:02.777672 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-09 01:26:02.777689 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-04-09 01:26:02.777694 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-09 01:26:02.777699 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-09 01:26:02.777734 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-09 01:26:02.777742 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-09 01:26:02.777749 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-09 01:26:02.777755 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-09 01:26:02.777762 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-09 01:26:02.777769 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-09 01:26:02.777775 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-09 01:26:02.777782 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-09 01:26:02.777789 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-09 01:26:02.777802 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:24:11.000000 |
2026-04-09 01:26:02.777818 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-09 01:26:02.777826 | orchestrator | | accessIPv4 | |
2026-04-09 01:26:02.777832 | orchestrator | | accessIPv6 | |
2026-04-09 01:26:02.777839 | orchestrator | | addresses | test-1=192.168.112.143, 192.168.200.167 |
2026-04-09 01:26:02.777845 | orchestrator | | config_drive | |
2026-04-09 01:26:02.777852 | orchestrator | | created | 2026-04-09T01:23:45Z |
2026-04-09 01:26:02.777858 | orchestrator | | description | None |
2026-04-09 01:26:02.777869 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-09 01:26:02.777879 | orchestrator | | hostId | d8754f923292b156dac01a9acba6902831254a6e4370b945605360b5 |
2026-04-09 01:26:02.777883 | orchestrator | | host_status | None |
2026-04-09 01:26:02.777893 | orchestrator | | id | 8be5dc37-febe-4e41-8648-e76027e991a7 |
2026-04-09 01:26:02.777898 | orchestrator | | image | N/A (booted from volume) |
2026-04-09 01:26:02.777901 | orchestrator | | key_name | test |
2026-04-09 01:26:02.777905 | orchestrator | | locked | False |
2026-04-09 01:26:02.777909 | orchestrator | | locked_reason | None |
2026-04-09 01:26:02.777913 | orchestrator | | name | test-1 |
2026-04-09 01:26:02.777917 | orchestrator | | pinned_availability_zone | None |
2026-04-09 01:26:02.777923 | orchestrator | | progress | 0 |
2026-04-09 01:26:02.777935 | orchestrator | | project_id | d58b1e6d443f4b39846dc4b10d757807 |
2026-04-09 01:26:02.777939 | orchestrator | | properties | hostname='test-1' |
2026-04-09 01:26:02.777947 | orchestrator | | security_groups | name='ssh' |
2026-04-09 01:26:02.777951 | orchestrator | | | name='icmp' |
2026-04-09 01:26:02.777955 | orchestrator | | server_groups | None |
2026-04-09 01:26:02.777958 | orchestrator | | status | ACTIVE |
2026-04-09 01:26:02.777962 | orchestrator | | tags | test |
2026-04-09 01:26:02.777966 | orchestrator | | trusted_image_certificates | None |
2026-04-09 01:26:02.777970 | orchestrator | | updated | 2026-04-09T01:24:45Z |
2026-04-09 01:26:02.777981 | orchestrator | | user_id | c811549098b8438ba77bd4a5b2c4cf11 |
2026-04-09 01:26:02.777987 | orchestrator | | volumes_attached | delete_on_termination='True', id='bad14a02-3b15-45de-91e0-c43f8e54b4f4' |
2026-04-09 01:26:02.784588 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:26:03.105822 | orchestrator | + openstack --os-cloud test server show test-2
2026-04-09 01:26:06.137154 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:26:06.137284 | orchestrator | | Field | Value |
2026-04-09 01:26:06.137297 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:26:06.137302 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-09 01:26:06.137306 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-09 01:26:06.137310 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-09 01:26:06.137343 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-09 01:26:06.137347 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-09 01:26:06.137351 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-09 01:26:06.137368 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-09 01:26:06.137372 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-09 01:26:06.137376 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-09 01:26:06.137380 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-09 01:26:06.137384 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-09 01:26:06.137388 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-09 01:26:06.137396 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-09 01:26:06.137402 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-09 01:26:06.137407 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-09 01:26:06.137411 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:24:13.000000 |
2026-04-09 01:26:06.137418 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-09 01:26:06.137422 | orchestrator | | accessIPv4 | |
2026-04-09 01:26:06.137426 | orchestrator | | accessIPv6 | |
2026-04-09 01:26:06.137430 | orchestrator | | addresses | test-2=192.168.112.103, 192.168.201.218 |
2026-04-09 01:26:06.137434 | orchestrator | | config_drive | |
2026-04-09 01:26:06.137441 | orchestrator | | created | 2026-04-09T01:23:46Z |
2026-04-09 01:26:06.137445 | orchestrator | | description | None |
2026-04-09 01:26:06.137449 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-09 01:26:06.137453 | orchestrator | | hostId | 66b83f40552e2667e2f3e86d87e01f07806227599a231c79993524ac |
2026-04-09 01:26:06.137456 | orchestrator | | host_status | None |
2026-04-09 01:26:06.137465 | orchestrator | | id | da955dc3-e2dc-40d0-8b0d-45d177d70f7f |
2026-04-09 01:26:06.137469 | orchestrator | | image | N/A (booted from volume) |
2026-04-09 01:26:06.137473 | orchestrator | | key_name | test |
2026-04-09 01:26:06.137477 | orchestrator | | locked | False |
2026-04-09 01:26:06.137486 | orchestrator | | locked_reason | None |
2026-04-09 01:26:06.137493 | orchestrator | | name | test-2 |
2026-04-09 01:26:06.137497 | orchestrator | | pinned_availability_zone | None |
2026-04-09 01:26:06.137503 | orchestrator | | progress | 0 |
2026-04-09 01:26:06.137519 | orchestrator | | project_id | d58b1e6d443f4b39846dc4b10d757807 |
2026-04-09 01:26:06.137530 | orchestrator | | properties | hostname='test-2' |
2026-04-09 01:26:06.137544 | orchestrator | | security_groups | name='ssh' |
2026-04-09 01:26:06.137553 | orchestrator | | | name='icmp' |
2026-04-09 01:26:06.137559 | orchestrator | | server_groups | None |
2026-04-09 01:26:06.137565 | orchestrator | | status | ACTIVE |
2026-04-09 01:26:06.137576 | orchestrator | | tags | test |
2026-04-09 01:26:06.137582 | orchestrator | | trusted_image_certificates | None |
2026-04-09 01:26:06.137589 | orchestrator | | updated | 2026-04-09T01:24:45Z |
2026-04-09 01:26:06.137599 | orchestrator | | user_id | c811549098b8438ba77bd4a5b2c4cf11 |
2026-04-09 01:26:06.137606 | orchestrator | | volumes_attached | delete_on_termination='True', id='ba50435f-4524-4c63-bbd1-7ca24855159b' |
2026-04-09 01:26:06.141530 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:26:06.361332 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-09 01:26:09.153549 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:26:09.153625 | orchestrator | | Field | Value |
2026-04-09 01:26:09.153634 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2026-04-09 01:26:09.153661 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-09 01:26:09.153668 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-09 01:26:09.153674 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-09 01:26:09.153678 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-09 01:26:09.153696 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-09 01:26:09.153702 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-09 01:26:09.153722 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-09 01:26:09.153728 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-09 01:26:09.153734 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-09 01:26:09.153747 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-09 01:26:09.153755 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-09 01:26:09.153762 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-09 01:26:09.153768 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-09 01:26:09.153774 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-09 01:26:09.153785 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-09 01:26:09.153792 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:24:13.000000 |
2026-04-09 01:26:09.153802 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-09 01:26:09.153808 | orchestrator | | accessIPv4 | |
2026-04-09 01:26:09.153820 | orchestrator | | accessIPv6 | |
2026-04-09 01:26:09.153826 | orchestrator | | addresses | test-2=192.168.112.109, 192.168.201.55 |
2026-04-09 01:26:09.153833 | orchestrator | | config_drive | |
2026-04-09 01:26:09.153838 | orchestrator | | created | 2026-04-09T01:23:46Z |
2026-04-09 01:26:09.153845 | orchestrator | | description | None |
2026-04-09 01:26:09.153851 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-09 01:26:09.153858 | orchestrator | | hostId | 66b83f40552e2667e2f3e86d87e01f07806227599a231c79993524ac |
2026-04-09 01:26:09.153864 | orchestrator | | host_status | None |
2026-04-09 01:26:09.153879 | orchestrator | | id | 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 |
2026-04-09 01:26:09.153888 | orchestrator | | image | N/A (booted from volume) |
2026-04-09 01:26:09.153901 | orchestrator | | key_name | test |
2026-04-09 01:26:09.153907 | orchestrator | | locked | False |
2026-04-09 01:26:09.153913 | orchestrator | | locked_reason | None |
2026-04-09 01:26:09.153919 | orchestrator | | name | test-3 |
2026-04-09 01:26:09.153925 | orchestrator | | pinned_availability_zone | None |
2026-04-09 01:26:09.153935 | orchestrator | | progress | 0 |
2026-04-09 01:26:09.153940 | orchestrator | | project_id | d58b1e6d443f4b39846dc4b10d757807 |
2026-04-09 01:26:09.153947 | orchestrator | | properties | hostname='test-3' |
2026-04-09 01:26:09.153958 | orchestrator | | security_groups | name='ssh' |
2026-04-09 01:26:09.153968 | orchestrator | | | name='icmp' |
2026-04-09 01:26:09.153975 | orchestrator | | server_groups | None |
2026-04-09 01:26:09.153982 | orchestrator | | status | ACTIVE |
2026-04-09 01:26:09.153988 | orchestrator | | tags | test |
2026-04-09 01:26:09.153995 | orchestrator | | trusted_image_certificates | None |
2026-04-09 01:26:09.154001 | orchestrator | | updated | 2026-04-09T01:24:46Z |
2026-04-09 01:26:09.154008 | orchestrator | | user_id | c811549098b8438ba77bd4a5b2c4cf11 |
2026-04-09 01:26:09.154063 | orchestrator | | volumes_attached | delete_on_termination='True', id='1d467399-f7e1-42c6-a140-9e804b97d044' |
2026-04-09 01:26:09.157884 | orchestrator |
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:26:09.389029 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-09 01:26:12.342933 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:26:12.343027 | orchestrator | | Field | Value | 2026-04-09 01:26:12.343038 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:26:12.343045 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 01:26:12.343051 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 01:26:12.343057 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 01:26:12.343398 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-09 01:26:12.343779 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 01:26:12.343803 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 
01:26:12.343840 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 01:26:12.343846 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 01:26:12.343851 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 01:26:12.343856 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 01:26:12.343861 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 01:26:12.343870 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 01:26:12.343875 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 01:26:12.343880 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 01:26:12.343885 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 01:26:12.343890 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:24:14.000000 | 2026-04-09 01:26:12.343903 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 01:26:12.343907 | orchestrator | | accessIPv4 | | 2026-04-09 01:26:12.343911 | orchestrator | | accessIPv6 | | 2026-04-09 01:26:12.343915 | orchestrator | | addresses | test-3=192.168.112.175, 192.168.202.101 | 2026-04-09 01:26:12.343919 | orchestrator | | config_drive | | 2026-04-09 01:26:12.343925 | orchestrator | | created | 2026-04-09T01:23:47Z | 2026-04-09 01:26:12.343929 | orchestrator | | description | None | 2026-04-09 01:26:12.343949 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 01:26:12.343960 | orchestrator | | hostId | a965a7dcee192282834e80a8a230247d833e7a75db530ca791bbf620 | 2026-04-09 01:26:12.343968 | orchestrator | | host_status | None | 2026-04-09 01:26:12.343976 | orchestrator | | id | 
f3c8cd64-430f-4006-9746-b1f85c50090f | 2026-04-09 01:26:12.343980 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 01:26:12.343984 | orchestrator | | key_name | test | 2026-04-09 01:26:12.343988 | orchestrator | | locked | False | 2026-04-09 01:26:12.343992 | orchestrator | | locked_reason | None | 2026-04-09 01:26:12.343998 | orchestrator | | name | test-4 | 2026-04-09 01:26:12.344002 | orchestrator | | pinned_availability_zone | None | 2026-04-09 01:26:12.344006 | orchestrator | | progress | 0 | 2026-04-09 01:26:12.344013 | orchestrator | | project_id | d58b1e6d443f4b39846dc4b10d757807 | 2026-04-09 01:26:12.344017 | orchestrator | | properties | hostname='test-4' | 2026-04-09 01:26:12.344026 | orchestrator | | security_groups | name='ssh' | 2026-04-09 01:26:12.344030 | orchestrator | | | name='icmp' | 2026-04-09 01:26:12.344034 | orchestrator | | server_groups | None | 2026-04-09 01:26:12.344038 | orchestrator | | status | ACTIVE | 2026-04-09 01:26:12.344042 | orchestrator | | tags | test | 2026-04-09 01:26:12.344048 | orchestrator | | trusted_image_certificates | None | 2026-04-09 01:26:12.344052 | orchestrator | | updated | 2026-04-09T01:24:46Z | 2026-04-09 01:26:12.344061 | orchestrator | | user_id | c811549098b8438ba77bd4a5b2c4cf11 | 2026-04-09 01:26:12.344065 | orchestrator | | volumes_attached | delete_on_termination='True', id='85ee27bf-4adf-4d0e-8684-1dede5321b7f' | 2026-04-09 01:26:12.349317 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:26:12.629159 | orchestrator | + server_ping 2026-04-09 01:26:12.630241 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:26:12.630322 | orchestrator | ++ tr -d '\r' 2026-04-09 01:26:15.379393 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:15.379438 | orchestrator | + ping -c3 192.168.112.143 2026-04-09 01:26:15.388956 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 2026-04-09 01:26:15.389006 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=4.40 ms 2026-04-09 01:26:16.388534 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.05 ms 2026-04-09 01:26:17.389516 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.67 ms 2026-04-09 01:26:17.389694 | orchestrator | 2026-04-09 01:26:17.390339 | orchestrator | --- 192.168.112.143 ping statistics --- 2026-04-09 01:26:17.390420 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:26:17.390431 | orchestrator | rtt min/avg/max/mdev = 1.667/2.703/4.398/1.208 ms 2026-04-09 01:26:17.390452 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:17.390459 | orchestrator | + ping -c3 192.168.112.109 2026-04-09 01:26:17.405603 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 
2026-04-09 01:26:17.405675 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=10.8 ms 2026-04-09 01:26:18.397616 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.42 ms 2026-04-09 01:26:19.399345 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.03 ms 2026-04-09 01:26:19.399438 | orchestrator | 2026-04-09 01:26:19.399452 | orchestrator | --- 192.168.112.109 ping statistics --- 2026-04-09 01:26:19.399461 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-09 01:26:19.399467 | orchestrator | rtt min/avg/max/mdev = 2.025/5.098/10.847/4.068 ms 2026-04-09 01:26:19.400186 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:19.400248 | orchestrator | + ping -c3 192.168.112.185 2026-04-09 01:26:19.409340 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 2026-04-09 01:26:19.409410 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=4.82 ms 2026-04-09 01:26:20.408817 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.05 ms 2026-04-09 01:26:21.409932 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.35 ms 2026-04-09 01:26:21.409984 | orchestrator | 2026-04-09 01:26:21.409991 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-04-09 01:26:21.409999 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:26:21.410055 | orchestrator | rtt min/avg/max/mdev = 1.351/2.737/4.816/1.496 ms 2026-04-09 01:26:21.410064 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:21.410071 | orchestrator | + ping -c3 192.168.112.103 2026-04-09 01:26:21.420084 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
2026-04-09 01:26:21.420154 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=5.45 ms 2026-04-09 01:26:22.417958 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=1.58 ms 2026-04-09 01:26:23.419100 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.23 ms 2026-04-09 01:26:23.419151 | orchestrator | 2026-04-09 01:26:23.419165 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-09 01:26:23.419170 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:26:23.419174 | orchestrator | rtt min/avg/max/mdev = 1.230/2.751/5.447/1.911 ms 2026-04-09 01:26:23.419178 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:23.419183 | orchestrator | + ping -c3 192.168.112.175 2026-04-09 01:26:23.427862 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 2026-04-09 01:26:23.427906 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=3.55 ms 2026-04-09 01:26:24.427901 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=1.55 ms 2026-04-09 01:26:25.429847 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.19 ms 2026-04-09 01:26:25.429910 | orchestrator | 2026-04-09 01:26:25.429918 | orchestrator | --- 192.168.112.175 ping statistics --- 2026-04-09 01:26:25.429923 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-09 01:26:25.429928 | orchestrator | rtt min/avg/max/mdev = 1.188/2.095/3.550/1.039 ms 2026-04-09 01:26:25.430708 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 01:26:25.430730 | orchestrator | + compute_list 2026-04-09 01:26:25.430735 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:26:27.043612 | orchestrator | 2026-04-09 01:26:27 | ERROR  | Unable to get ansible vault password 2026-04-09 01:26:27.043705 
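The `server_ping` runs traced above loop over every ACTIVE floating IP and ping each one three times. A minimal reconstruction of the helper, inferred from the `+ for address in …` trace lines (the function body itself is not shown in the log, so this is a sketch, not the canonical testbed script):

```shell
# Sketch of the server_ping helper as reconstructed from the shell trace.
# Assumes the "openstack" CLI is configured with a cloud named "test".
# tr -d '\r' strips carriage returns the CLI may emit in value output.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

Any floating IP that drops packets makes `ping` exit non-zero, which (under `set -e`) fails the job, so this doubles as a reachability gate between migration steps.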
| orchestrator | 2026-04-09 01:26:27 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:27.043718 | orchestrator | 2026-04-09 01:26:27 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:30.542009 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:30.542167 | orchestrator | | ID | Name | Status | 2026-04-09 01:26:30.542186 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:26:30.542190 | orchestrator | | 8be5dc37-febe-4e41-8648-e76027e991a7 | test-1 | ACTIVE | 2026-04-09 01:26:30.542195 | orchestrator | | b8ca9ad9-9664-4929-bd79-74c25be32b8c | test | ACTIVE | 2026-04-09 01:26:30.542200 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:30.829337 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:26:32.494129 | orchestrator | 2026-04-09 01:26:32 | ERROR  | Unable to get ansible vault password 2026-04-09 01:26:32.494212 | orchestrator | 2026-04-09 01:26:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:32.494222 | orchestrator | 2026-04-09 01:26:32 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:34.076572 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:34.076678 | orchestrator | | ID | Name | Status | 2026-04-09 01:26:34.076689 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:26:34.076696 | orchestrator | | f3c8cd64-430f-4006-9746-b1f85c50090f | test-4 | ACTIVE | 2026-04-09 01:26:34.076702 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:34.424463 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:26:36.036391 | orchestrator | 2026-04-09 01:26:36 | ERROR  | Unable to get ansible vault password 
2026-04-09 01:26:36.036484 | orchestrator | 2026-04-09 01:26:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:36.036493 | orchestrator | 2026-04-09 01:26:36 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:37.635117 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:37.635206 | orchestrator | | ID | Name | Status | 2026-04-09 01:26:37.635215 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:26:37.635222 | orchestrator | | 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 | test-3 | ACTIVE | 2026-04-09 01:26:37.635230 | orchestrator | | da955dc3-e2dc-40d0-8b0d-45d177d70f7f | test-2 | ACTIVE | 2026-04-09 01:26:37.635237 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:38.067527 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-09 01:26:39.691229 | orchestrator | 2026-04-09 01:26:39 | ERROR  | Unable to get ansible vault password 2026-04-09 01:26:39.691329 | orchestrator | 2026-04-09 01:26:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:39.691338 | orchestrator | 2026-04-09 01:26:39 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:41.191967 | orchestrator | 2026-04-09 01:26:41 | INFO  | Live migrating server f3c8cd64-430f-4006-9746-b1f85c50090f 2026-04-09 01:26:53.528482 | orchestrator | 2026-04-09 01:26:53 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:26:55.971838 | orchestrator | 2026-04-09 01:26:55 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:26:58.406497 | orchestrator | 2026-04-09 01:26:58 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 
01:27:00.759868 | orchestrator | 2026-04-09 01:27:00 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:27:02.991180 | orchestrator | 2026-04-09 01:27:02 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:27:05.225381 | orchestrator | 2026-04-09 01:27:05 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:27:07.621030 | orchestrator | 2026-04-09 01:27:07 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:27:10.009764 | orchestrator | 2026-04-09 01:27:10 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:27:12.319481 | orchestrator | 2026-04-09 01:27:12 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) completed with status ACTIVE 2026-04-09 01:27:12.599948 | orchestrator | + compute_list 2026-04-09 01:27:12.600030 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:27:14.154217 | orchestrator | 2026-04-09 01:27:14 | ERROR  | Unable to get ansible vault password 2026-04-09 01:27:14.154367 | orchestrator | 2026-04-09 01:27:14 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:27:14.154378 | orchestrator | 2026-04-09 01:27:14 | ERROR  | Dropping encrypted entries 2026-04-09 01:27:15.665634 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:27:15.665700 | orchestrator | | ID | Name | Status | 2026-04-09 01:27:15.665709 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:27:15.665716 | orchestrator | | f3c8cd64-430f-4006-9746-b1f85c50090f | test-4 | ACTIVE | 2026-04-09 01:27:15.665722 | orchestrator | | 8be5dc37-febe-4e41-8648-e76027e991a7 | test-1 | ACTIVE | 2026-04-09 01:27:15.665747 | 
orchestrator | | b8ca9ad9-9664-4929-bd79-74c25be32b8c | test | ACTIVE | 2026-04-09 01:27:15.665754 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:27:15.935954 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:27:17.528017 | orchestrator | 2026-04-09 01:27:17 | ERROR  | Unable to get ansible vault password 2026-04-09 01:27:17.528103 | orchestrator | 2026-04-09 01:27:17 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:27:17.528115 | orchestrator | 2026-04-09 01:27:17 | ERROR  | Dropping encrypted entries 2026-04-09 01:27:18.770613 | orchestrator | +------+--------+----------+ 2026-04-09 01:27:18.770712 | orchestrator | | ID | Name | Status | 2026-04-09 01:27:18.770723 | orchestrator | |------+--------+----------| 2026-04-09 01:27:18.770730 | orchestrator | +------+--------+----------+ 2026-04-09 01:27:19.054756 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:27:20.616086 | orchestrator | 2026-04-09 01:27:20 | ERROR  | Unable to get ansible vault password 2026-04-09 01:27:20.616174 | orchestrator | 2026-04-09 01:27:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:27:20.616186 | orchestrator | 2026-04-09 01:27:20 | ERROR  | Dropping encrypted entries 2026-04-09 01:27:22.194326 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:27:22.194382 | orchestrator | | ID | Name | Status | 2026-04-09 01:27:22.194388 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:27:22.194392 | orchestrator | | 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 | test-3 | ACTIVE | 2026-04-09 01:27:22.194396 | orchestrator | | da955dc3-e2dc-40d0-8b0d-45d177d70f7f | test-2 | ACTIVE | 2026-04-09 01:27:22.194400 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-09 01:27:22.491179 | orchestrator | + server_ping 2026-04-09 01:27:22.492722 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:27:22.492935 | orchestrator | ++ tr -d '\r' 2026-04-09 01:27:25.114715 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:25.114773 | orchestrator | + ping -c3 192.168.112.143 2026-04-09 01:27:25.122761 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 2026-04-09 01:27:25.122819 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=4.89 ms 2026-04-09 01:27:26.120222 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=1.70 ms 2026-04-09 01:27:27.121879 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.55 ms 2026-04-09 01:27:27.121923 | orchestrator | 2026-04-09 01:27:27.121932 | orchestrator | --- 192.168.112.143 ping statistics --- 2026-04-09 01:27:27.121940 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-09 01:27:27.121947 | orchestrator | rtt min/avg/max/mdev = 1.548/2.712/4.888/1.539 ms 2026-04-09 01:27:27.122850 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:27.122910 | orchestrator | + ping -c3 192.168.112.109 2026-04-09 01:27:27.137161 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 
2026-04-09 01:27:27.137234 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=9.36 ms 2026-04-09 01:27:28.132012 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.52 ms 2026-04-09 01:27:29.133573 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.80 ms 2026-04-09 01:27:29.133672 | orchestrator | 2026-04-09 01:27:29.133686 | orchestrator | --- 192.168.112.109 ping statistics --- 2026-04-09 01:27:29.133698 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:27:29.133726 | orchestrator | rtt min/avg/max/mdev = 1.799/4.559/9.364/3.409 ms 2026-04-09 01:27:29.134090 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:29.134145 | orchestrator | + ping -c3 192.168.112.185 2026-04-09 01:27:29.146345 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 2026-04-09 01:27:29.146417 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=6.46 ms 2026-04-09 01:27:30.143900 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.16 ms 2026-04-09 01:27:31.143936 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.54 ms 2026-04-09 01:27:31.144033 | orchestrator | 2026-04-09 01:27:31.144043 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-04-09 01:27:31.144051 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-09 01:27:31.144058 | orchestrator | rtt min/avg/max/mdev = 1.540/3.388/6.464/2.189 ms 2026-04-09 01:27:31.146205 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:31.146273 | orchestrator | + ping -c3 192.168.112.103 2026-04-09 01:27:31.160156 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
2026-04-09 01:27:31.160242 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=8.14 ms 2026-04-09 01:27:32.156191 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.60 ms 2026-04-09 01:27:33.156695 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.74 ms 2026-04-09 01:27:33.156784 | orchestrator | 2026-04-09 01:27:33.156795 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-09 01:27:33.156803 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-09 01:27:33.156810 | orchestrator | rtt min/avg/max/mdev = 1.741/4.158/8.138/2.835 ms 2026-04-09 01:27:33.156826 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:33.156834 | orchestrator | + ping -c3 192.168.112.175 2026-04-09 01:27:33.174102 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 2026-04-09 01:27:33.174185 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=12.5 ms 2026-04-09 01:27:34.165709 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.14 ms 2026-04-09 01:27:35.166870 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.84 ms 2026-04-09 01:27:35.166958 | orchestrator | 2026-04-09 01:27:35.166965 | orchestrator | --- 192.168.112.175 ping statistics --- 2026-04-09 01:27:35.166971 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:27:35.166976 | orchestrator | rtt min/avg/max/mdev = 1.841/5.489/12.492/4.952 ms 2026-04-09 01:27:35.167259 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-04-09 01:27:36.797392 | orchestrator | 2026-04-09 01:27:36 | ERROR  | Unable to get ansible vault password 2026-04-09 01:27:36.797453 | orchestrator | 2026-04-09 01:27:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-09 01:27:36.797463 | orchestrator | 2026-04-09 01:27:36 | ERROR  | Dropping encrypted entries 2026-04-09 01:27:38.048873 | orchestrator | 2026-04-09 01:27:38 | INFO  | Live migrating server 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 2026-04-09 01:27:49.043381 | orchestrator | 2026-04-09 01:27:49 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:27:51.373539 | orchestrator | 2026-04-09 01:27:51 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:27:53.737078 | orchestrator | 2026-04-09 01:27:53 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:27:56.098622 | orchestrator | 2026-04-09 01:27:56 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:27:58.413670 | orchestrator | 2026-04-09 01:27:58 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:28:00.840531 | orchestrator | 2026-04-09 01:28:00 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:28:03.152142 | orchestrator | 2026-04-09 01:28:03 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:28:05.422842 | orchestrator | 2026-04-09 01:28:05 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:28:07.709109 | orchestrator | 2026-04-09 01:28:07 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) completed with status ACTIVE 2026-04-09 01:28:07.709165 | orchestrator | 2026-04-09 01:28:07 | INFO  | Live migrating server da955dc3-e2dc-40d0-8b0d-45d177d70f7f 2026-04-09 01:28:20.304908 | orchestrator | 2026-04-09 01:28:20 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is 
still in progress 2026-04-09 01:28:22.665672 | orchestrator | 2026-04-09 01:28:22 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:28:25.006539 | orchestrator | 2026-04-09 01:28:25 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:28:27.251178 | orchestrator | 2026-04-09 01:28:27 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:28:29.536801 | orchestrator | 2026-04-09 01:28:29 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:28:31.827276 | orchestrator | 2026-04-09 01:28:31 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:28:34.130263 | orchestrator | 2026-04-09 01:28:34 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:28:36.352636 | orchestrator | 2026-04-09 01:28:36 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:28:38.581025 | orchestrator | 2026-04-09 01:28:38 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:28:41.010684 | orchestrator | 2026-04-09 01:28:41 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) completed with status ACTIVE 2026-04-09 01:28:41.301120 | orchestrator | + compute_list 2026-04-09 01:28:41.301221 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:28:42.800296 | orchestrator | 2026-04-09 01:28:42 | ERROR  | Unable to get ansible vault password 2026-04-09 01:28:42.800382 | orchestrator | 2026-04-09 01:28:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:28:42.800394 | orchestrator | 2026-04-09 01:28:42 | ERROR  | Dropping encrypted entries 
2026-04-09 01:28:44.384633 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:28:44.384718 | orchestrator | | ID | Name | Status | 2026-04-09 01:28:44.384724 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:28:44.384729 | orchestrator | | f3c8cd64-430f-4006-9746-b1f85c50090f | test-4 | ACTIVE | 2026-04-09 01:28:44.384733 | orchestrator | | 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 | test-3 | ACTIVE | 2026-04-09 01:28:44.384737 | orchestrator | | da955dc3-e2dc-40d0-8b0d-45d177d70f7f | test-2 | ACTIVE | 2026-04-09 01:28:44.384741 | orchestrator | | 8be5dc37-febe-4e41-8648-e76027e991a7 | test-1 | ACTIVE | 2026-04-09 01:28:44.384746 | orchestrator | | b8ca9ad9-9664-4929-bd79-74c25be32b8c | test | ACTIVE | 2026-04-09 01:28:44.384750 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:28:44.668560 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:28:46.245287 | orchestrator | 2026-04-09 01:28:46 | ERROR  | Unable to get ansible vault password 2026-04-09 01:28:46.245397 | orchestrator | 2026-04-09 01:28:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:28:46.245423 | orchestrator | 2026-04-09 01:28:46 | ERROR  | Dropping encrypted entries 2026-04-09 01:28:47.471752 | orchestrator | +------+--------+----------+ 2026-04-09 01:28:47.471842 | orchestrator | | ID | Name | Status | 2026-04-09 01:28:47.471854 | orchestrator | |------+--------+----------| 2026-04-09 01:28:47.471861 | orchestrator | +------+--------+----------+ 2026-04-09 01:28:47.779982 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:28:49.359513 | orchestrator | 2026-04-09 01:28:49 | ERROR  | Unable to get ansible vault password 2026-04-09 01:28:49.359606 | orchestrator | 2026-04-09 01:28:49 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: 
'/share/ansible_vault_password.key' 2026-04-09 01:28:49.359619 | orchestrator | 2026-04-09 01:28:49 | ERROR  | Dropping encrypted entries 2026-04-09 01:28:50.565491 | orchestrator | +------+--------+----------+ 2026-04-09 01:28:50.565563 | orchestrator | | ID | Name | Status | 2026-04-09 01:28:50.565570 | orchestrator | |------+--------+----------| 2026-04-09 01:28:50.565574 | orchestrator | +------+--------+----------+ 2026-04-09 01:28:50.880594 | orchestrator | + server_ping 2026-04-09 01:28:50.880765 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:28:50.881239 | orchestrator | ++ tr -d '\r' 2026-04-09 01:28:53.577695 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:28:53.577763 | orchestrator | + ping -c3 192.168.112.143 2026-04-09 01:28:53.586658 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 
2026-04-09 01:28:53.586746 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=5.29 ms 2026-04-09 01:28:54.585036 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.06 ms 2026-04-09 01:28:55.585498 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.48 ms 2026-04-09 01:28:55.585580 | orchestrator | 2026-04-09 01:28:55.585592 | orchestrator | --- 192.168.112.143 ping statistics --- 2026-04-09 01:28:55.585600 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-09 01:28:55.585607 | orchestrator | rtt min/avg/max/mdev = 1.475/2.941/5.288/1.676 ms 2026-04-09 01:28:55.586366 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:28:55.586386 | orchestrator | + ping -c3 192.168.112.109 2026-04-09 01:28:55.595980 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 2026-04-09 01:28:55.596068 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=7.35 ms 2026-04-09 01:28:56.592681 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.34 ms 2026-04-09 01:28:57.593432 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.67 ms 2026-04-09 01:28:57.593507 | orchestrator | 2026-04-09 01:28:57.593516 | orchestrator | --- 192.168.112.109 ping statistics --- 2026-04-09 01:28:57.593524 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:28:57.593530 | orchestrator | rtt min/avg/max/mdev = 1.673/3.787/7.349/2.532 ms 2026-04-09 01:28:57.594750 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:28:57.594773 | orchestrator | + ping -c3 192.168.112.185 2026-04-09 01:28:57.605956 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 
2026-04-09 01:28:57.606063 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=6.90 ms 2026-04-09 01:28:58.602766 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.18 ms 2026-04-09 01:28:59.603748 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.47 ms 2026-04-09 01:28:59.603837 | orchestrator | 2026-04-09 01:28:59.603848 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-04-09 01:28:59.603855 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-09 01:28:59.603876 | orchestrator | rtt min/avg/max/mdev = 1.467/3.517/6.904/2.412 ms 2026-04-09 01:28:59.604176 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:28:59.604217 | orchestrator | + ping -c3 192.168.112.103 2026-04-09 01:28:59.613176 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2026-04-09 01:28:59.613244 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=6.28 ms 2026-04-09 01:29:00.610418 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=1.70 ms 2026-04-09 01:29:01.612684 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.50 ms 2026-04-09 01:29:01.612765 | orchestrator | 2026-04-09 01:29:01.612774 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-09 01:29:01.612782 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-09 01:29:01.612787 | orchestrator | rtt min/avg/max/mdev = 1.504/3.161/6.278/2.205 ms 2026-04-09 01:29:01.612804 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:29:01.612816 | orchestrator | + ping -c3 192.168.112.175 2026-04-09 01:29:01.625830 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 
2026-04-09 01:29:01.625905 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.66 ms 2026-04-09 01:29:02.623236 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.11 ms 2026-04-09 01:29:03.623628 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.52 ms 2026-04-09 01:29:03.623713 | orchestrator | 2026-04-09 01:29:03.623720 | orchestrator | --- 192.168.112.175 ping statistics --- 2026-04-09 01:29:03.623726 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:29:03.623731 | orchestrator | rtt min/avg/max/mdev = 1.523/3.433/6.664/2.297 ms 2026-04-09 01:29:03.624840 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-04-09 01:29:05.211727 | orchestrator | 2026-04-09 01:29:05 | ERROR  | Unable to get ansible vault password 2026-04-09 01:29:05.211798 | orchestrator | 2026-04-09 01:29:05 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:29:05.211809 | orchestrator | 2026-04-09 01:29:05 | ERROR  | Dropping encrypted entries 2026-04-09 01:29:06.855710 | orchestrator | 2026-04-09 01:29:06 | INFO  | Live migrating server f3c8cd64-430f-4006-9746-b1f85c50090f 2026-04-09 01:29:18.172192 | orchestrator | 2026-04-09 01:29:18 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:29:20.538216 | orchestrator | 2026-04-09 01:29:20 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:29:22.895122 | orchestrator | 2026-04-09 01:29:22 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:29:25.266926 | orchestrator | 2026-04-09 01:29:25 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:29:27.570962 | orchestrator | 2026-04-09 01:29:27 | INFO  | 
Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:29:29.978894 | orchestrator | 2026-04-09 01:29:29 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:29:32.404926 | orchestrator | 2026-04-09 01:29:32 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:29:34.790166 | orchestrator | 2026-04-09 01:29:34 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:29:37.083074 | orchestrator | 2026-04-09 01:29:37 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) completed with status ACTIVE 2026-04-09 01:29:37.083616 | orchestrator | 2026-04-09 01:29:37 | INFO  | Live migrating server 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 2026-04-09 01:29:48.962623 | orchestrator | 2026-04-09 01:29:48 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:29:51.431652 | orchestrator | 2026-04-09 01:29:51 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:29:53.679759 | orchestrator | 2026-04-09 01:29:53 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:29:55.916546 | orchestrator | 2026-04-09 01:29:55 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:29:58.275157 | orchestrator | 2026-04-09 01:29:58 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:30:00.582339 | orchestrator | 2026-04-09 01:30:00 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:30:02.942006 | orchestrator | 2026-04-09 01:30:02 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 
01:30:05.164003 | orchestrator | 2026-04-09 01:30:05 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:30:07.547268 | orchestrator | 2026-04-09 01:30:07 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) completed with status ACTIVE 2026-04-09 01:30:07.547403 | orchestrator | 2026-04-09 01:30:07 | INFO  | Live migrating server da955dc3-e2dc-40d0-8b0d-45d177d70f7f 2026-04-09 01:30:19.699525 | orchestrator | 2026-04-09 01:30:19 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:30:21.997746 | orchestrator | 2026-04-09 01:30:21 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:30:24.414790 | orchestrator | 2026-04-09 01:30:24 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:30:26.713817 | orchestrator | 2026-04-09 01:30:26 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:30:28.897889 | orchestrator | 2026-04-09 01:30:28 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:30:31.325159 | orchestrator | 2026-04-09 01:30:31 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:30:33.558153 | orchestrator | 2026-04-09 01:30:33 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:30:35.839887 | orchestrator | 2026-04-09 01:30:35 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:30:38.210640 | orchestrator | 2026-04-09 01:30:38 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) completed with status ACTIVE 2026-04-09 01:30:38.210712 | orchestrator | 2026-04-09 01:30:38 | INFO  | Live migrating server 
8be5dc37-febe-4e41-8648-e76027e991a7 2026-04-09 01:30:50.001021 | orchestrator | 2026-04-09 01:30:49 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:30:52.434817 | orchestrator | 2026-04-09 01:30:52 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:30:54.739186 | orchestrator | 2026-04-09 01:30:54 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:30:56.965852 | orchestrator | 2026-04-09 01:30:56 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:30:59.247202 | orchestrator | 2026-04-09 01:30:59 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:31:01.510746 | orchestrator | 2026-04-09 01:31:01 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:31:03.982138 | orchestrator | 2026-04-09 01:31:03 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:31:06.190378 | orchestrator | 2026-04-09 01:31:06 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:31:08.440697 | orchestrator | 2026-04-09 01:31:08 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) completed with status ACTIVE 2026-04-09 01:31:08.440762 | orchestrator | 2026-04-09 01:31:08 | INFO  | Live migrating server b8ca9ad9-9664-4929-bd79-74c25be32b8c 2026-04-09 01:31:18.439389 | orchestrator | 2026-04-09 01:31:18 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:20.824407 | orchestrator | 2026-04-09 01:31:20 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:23.270236 | orchestrator | 2026-04-09 01:31:23 | INFO  | 
Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:25.583403 | orchestrator | 2026-04-09 01:31:25 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:27.874214 | orchestrator | 2026-04-09 01:31:27 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:30.167564 | orchestrator | 2026-04-09 01:31:30 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:32.405282 | orchestrator | 2026-04-09 01:31:32 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:34.702608 | orchestrator | 2026-04-09 01:31:34 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:37.033567 | orchestrator | 2026-04-09 01:31:37 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:39.251371 | orchestrator | 2026-04-09 01:31:39 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:31:41.579055 | orchestrator | 2026-04-09 01:31:41 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) completed with status ACTIVE 2026-04-09 01:31:41.878513 | orchestrator | + compute_list 2026-04-09 01:31:41.878586 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:31:43.514497 | orchestrator | 2026-04-09 01:31:43 | ERROR  | Unable to get ansible vault password 2026-04-09 01:31:43.514589 | orchestrator | 2026-04-09 01:31:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:31:43.514603 | orchestrator | 2026-04-09 01:31:43 | ERROR  | Dropping encrypted entries 2026-04-09 01:31:44.656867 | orchestrator | +------+--------+----------+ 2026-04-09 01:31:44.656938 | orchestrator | 
| ID | Name | Status | 2026-04-09 01:31:44.656949 | orchestrator | |------+--------+----------| 2026-04-09 01:31:44.656957 | orchestrator | +------+--------+----------+ 2026-04-09 01:31:45.024512 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:31:46.540679 | orchestrator | 2026-04-09 01:31:46 | ERROR  | Unable to get ansible vault password 2026-04-09 01:31:46.540728 | orchestrator | 2026-04-09 01:31:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:31:46.540735 | orchestrator | 2026-04-09 01:31:46 | ERROR  | Dropping encrypted entries 2026-04-09 01:31:47.976652 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:31:47.976716 | orchestrator | | ID | Name | Status | 2026-04-09 01:31:47.976723 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:31:47.976729 | orchestrator | | f3c8cd64-430f-4006-9746-b1f85c50090f | test-4 | ACTIVE | 2026-04-09 01:31:47.976734 | orchestrator | | 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 | test-3 | ACTIVE | 2026-04-09 01:31:47.976739 | orchestrator | | da955dc3-e2dc-40d0-8b0d-45d177d70f7f | test-2 | ACTIVE | 2026-04-09 01:31:47.976744 | orchestrator | | 8be5dc37-febe-4e41-8648-e76027e991a7 | test-1 | ACTIVE | 2026-04-09 01:31:47.976749 | orchestrator | | b8ca9ad9-9664-4929-bd79-74c25be32b8c | test | ACTIVE | 2026-04-09 01:31:47.976754 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:31:48.319683 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:31:49.842580 | orchestrator | 2026-04-09 01:31:49 | ERROR  | Unable to get ansible vault password 2026-04-09 01:31:49.843544 | orchestrator | 2026-04-09 01:31:49 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:31:49.843588 | orchestrator | 2026-04-09 01:31:49 | ERROR  | Dropping 
encrypted entries 2026-04-09 01:31:51.059538 | orchestrator | +------+--------+----------+ 2026-04-09 01:31:51.059612 | orchestrator | | ID | Name | Status | 2026-04-09 01:31:51.059618 | orchestrator | |------+--------+----------| 2026-04-09 01:31:51.059622 | orchestrator | +------+--------+----------+ 2026-04-09 01:31:51.358617 | orchestrator | + server_ping 2026-04-09 01:31:51.359824 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:31:51.359868 | orchestrator | ++ tr -d '\r' 2026-04-09 01:31:54.221640 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:31:54.221708 | orchestrator | + ping -c3 192.168.112.143 2026-04-09 01:31:54.234881 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 2026-04-09 01:31:54.234962 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=9.56 ms 2026-04-09 01:31:55.228597 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.31 ms 2026-04-09 01:31:56.228336 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.61 ms 2026-04-09 01:31:56.228450 | orchestrator | 2026-04-09 01:31:56.228459 | orchestrator | --- 192.168.112.143 ping statistics --- 2026-04-09 01:31:56.228466 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms 2026-04-09 01:31:56.228484 | orchestrator | rtt min/avg/max/mdev = 1.613/4.491/9.555/3.591 ms 2026-04-09 01:31:56.228537 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:31:56.228543 | orchestrator | + ping -c3 192.168.112.109 2026-04-09 01:31:56.243642 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 
2026-04-09 01:31:56.243731 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=10.2 ms 2026-04-09 01:31:57.237618 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.39 ms 2026-04-09 01:31:58.239087 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.98 ms 2026-04-09 01:31:58.239173 | orchestrator | 2026-04-09 01:31:58.239183 | orchestrator | --- 192.168.112.109 ping statistics --- 2026-04-09 01:31:58.239192 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:31:58.239198 | orchestrator | rtt min/avg/max/mdev = 1.984/4.855/10.195/3.779 ms 2026-04-09 01:31:58.239206 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:31:58.239213 | orchestrator | + ping -c3 192.168.112.185 2026-04-09 01:31:58.249070 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 2026-04-09 01:31:58.249142 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=6.82 ms 2026-04-09 01:31:59.246515 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.09 ms 2026-04-09 01:32:00.248036 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.64 ms 2026-04-09 01:32:00.248129 | orchestrator | 2026-04-09 01:32:00.248170 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-04-09 01:32:00.248179 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:32:00.248227 | orchestrator | rtt min/avg/max/mdev = 1.644/3.517/6.817/2.340 ms 2026-04-09 01:32:00.248586 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:32:00.248611 | orchestrator | + ping -c3 192.168.112.103 2026-04-09 01:32:00.256884 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
2026-04-09 01:32:00.256970 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=5.86 ms 2026-04-09 01:32:01.255177 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.06 ms 2026-04-09 01:32:02.256705 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.71 ms 2026-04-09 01:32:02.257497 | orchestrator | 2026-04-09 01:32:02.257550 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-09 01:32:02.257562 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:32:02.257570 | orchestrator | rtt min/avg/max/mdev = 1.706/3.209/5.860/1.879 ms 2026-04-09 01:32:02.257593 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:32:02.257603 | orchestrator | + ping -c3 192.168.112.175 2026-04-09 01:32:02.266210 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 2026-04-09 01:32:02.266283 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.13 ms 2026-04-09 01:32:03.263810 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.30 ms 2026-04-09 01:32:04.264256 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.42 ms 2026-04-09 01:32:04.264329 | orchestrator | 2026-04-09 01:32:04.264338 | orchestrator | --- 192.168.112.175 ping statistics --- 2026-04-09 01:32:04.264344 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:32:04.264350 | orchestrator | rtt min/avg/max/mdev = 1.424/3.282/6.129/2.043 ms 2026-04-09 01:32:04.265427 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-04-09 01:32:05.816558 | orchestrator | 2026-04-09 01:32:05 | ERROR  | Unable to get ansible vault password 2026-04-09 01:32:05.816631 | orchestrator | 2026-04-09 01:32:05 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-09 01:32:05.816640 | orchestrator | 2026-04-09 01:32:05 | ERROR  | Dropping encrypted entries 2026-04-09 01:32:07.468659 | orchestrator | 2026-04-09 01:32:07 | INFO  | Live migrating server f3c8cd64-430f-4006-9746-b1f85c50090f 2026-04-09 01:32:17.756913 | orchestrator | 2026-04-09 01:32:17 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:32:20.035344 | orchestrator | 2026-04-09 01:32:20 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:32:22.308692 | orchestrator | 2026-04-09 01:32:22 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:32:24.536216 | orchestrator | 2026-04-09 01:32:24 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:32:26.839722 | orchestrator | 2026-04-09 01:32:26 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:32:29.134280 | orchestrator | 2026-04-09 01:32:29 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:32:31.416626 | orchestrator | 2026-04-09 01:32:31 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:32:33.701252 | orchestrator | 2026-04-09 01:32:33 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) is still in progress 2026-04-09 01:32:36.032097 | orchestrator | 2026-04-09 01:32:36 | INFO  | Live migration of f3c8cd64-430f-4006-9746-b1f85c50090f (test-4) completed with status ACTIVE 2026-04-09 01:32:36.032197 | orchestrator | 2026-04-09 01:32:36 | INFO  | Live migrating server 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 2026-04-09 01:32:46.280914 | orchestrator | 2026-04-09 01:32:46 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is 
still in progress 2026-04-09 01:32:48.553456 | orchestrator | 2026-04-09 01:32:48 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:32:50.789753 | orchestrator | 2026-04-09 01:32:50 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:32:53.126802 | orchestrator | 2026-04-09 01:32:53 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:32:55.428635 | orchestrator | 2026-04-09 01:32:55 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:32:57.645117 | orchestrator | 2026-04-09 01:32:57 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:32:59.944806 | orchestrator | 2026-04-09 01:32:59 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:33:02.325087 | orchestrator | 2026-04-09 01:33:02 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) is still in progress 2026-04-09 01:33:04.573754 | orchestrator | 2026-04-09 01:33:04 | INFO  | Live migration of 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 (test-3) completed with status ACTIVE 2026-04-09 01:33:04.573828 | orchestrator | 2026-04-09 01:33:04 | INFO  | Live migrating server da955dc3-e2dc-40d0-8b0d-45d177d70f7f 2026-04-09 01:33:13.735430 | orchestrator | 2026-04-09 01:33:13 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:33:16.037559 | orchestrator | 2026-04-09 01:33:16 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:33:18.451701 | orchestrator | 2026-04-09 01:33:18 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:33:20.747502 | orchestrator | 2026-04-09 01:33:20 | INFO  | Live migration of 
da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:33:23.087567 | orchestrator | 2026-04-09 01:33:23 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:33:25.358225 | orchestrator | 2026-04-09 01:33:25 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:33:27.571707 | orchestrator | 2026-04-09 01:33:27 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:33:29.796551 | orchestrator | 2026-04-09 01:33:29 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) is still in progress 2026-04-09 01:33:32.111590 | orchestrator | 2026-04-09 01:33:32 | INFO  | Live migration of da955dc3-e2dc-40d0-8b0d-45d177d70f7f (test-2) completed with status ACTIVE 2026-04-09 01:33:32.111647 | orchestrator | 2026-04-09 01:33:32 | INFO  | Live migrating server 8be5dc37-febe-4e41-8648-e76027e991a7 2026-04-09 01:33:41.283525 | orchestrator | 2026-04-09 01:33:41 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:33:43.632150 | orchestrator | 2026-04-09 01:33:43 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:33:45.966655 | orchestrator | 2026-04-09 01:33:45 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:33:48.263045 | orchestrator | 2026-04-09 01:33:48 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:33:50.608831 | orchestrator | 2026-04-09 01:33:50 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:33:52.880319 | orchestrator | 2026-04-09 01:33:52 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:33:55.279230 | orchestrator 
| 2026-04-09 01:33:55 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:33:57.614386 | orchestrator | 2026-04-09 01:33:57 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:33:59.913394 | orchestrator | 2026-04-09 01:33:59 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) is still in progress 2026-04-09 01:34:02.142195 | orchestrator | 2026-04-09 01:34:02 | INFO  | Live migration of 8be5dc37-febe-4e41-8648-e76027e991a7 (test-1) completed with status ACTIVE 2026-04-09 01:34:02.142251 | orchestrator | 2026-04-09 01:34:02 | INFO  | Live migrating server b8ca9ad9-9664-4929-bd79-74c25be32b8c 2026-04-09 01:34:11.761673 | orchestrator | 2026-04-09 01:34:11 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:34:14.173586 | orchestrator | 2026-04-09 01:34:14 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:34:16.472963 | orchestrator | 2026-04-09 01:34:16 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:34:18.753144 | orchestrator | 2026-04-09 01:34:18 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:34:21.033905 | orchestrator | 2026-04-09 01:34:21 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:34:23.362149 | orchestrator | 2026-04-09 01:34:23 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:34:25.777231 | orchestrator | 2026-04-09 01:34:25 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 2026-04-09 01:34:28.254922 | orchestrator | 2026-04-09 01:34:28 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress 
2026-04-09 01:34:30.718162 | orchestrator | 2026-04-09 01:34:30 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress
2026-04-09 01:34:32.983228 | orchestrator | 2026-04-09 01:34:32 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress
2026-04-09 01:34:35.284117 | orchestrator | 2026-04-09 01:34:35 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) is still in progress
2026-04-09 01:34:37.576364 | orchestrator | 2026-04-09 01:34:37 | INFO  | Live migration of b8ca9ad9-9664-4929-bd79-74c25be32b8c (test) completed with status ACTIVE
2026-04-09 01:34:37.888905 | orchestrator | + compute_list
2026-04-09 01:34:37.888988 | orchestrator | + osism manage compute list testbed-node-3
2026-04-09 01:34:39.571716 | orchestrator | 2026-04-09 01:34:39 | ERROR  | Unable to get ansible vault password
2026-04-09 01:34:39.571808 | orchestrator | 2026-04-09 01:34:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-09 01:34:39.571820 | orchestrator | 2026-04-09 01:34:39 | ERROR  | Dropping encrypted entries
2026-04-09 01:34:40.776897 | orchestrator | +------+--------+----------+
2026-04-09 01:34:40.777036 | orchestrator | | ID   | Name   | Status   |
2026-04-09 01:34:40.777048 | orchestrator | |------+--------+----------|
2026-04-09 01:34:40.777054 | orchestrator | +------+--------+----------+
2026-04-09 01:34:41.063613 | orchestrator | + osism manage compute list testbed-node-4
2026-04-09 01:34:42.645541 | orchestrator | 2026-04-09 01:34:42 | ERROR  | Unable to get ansible vault password
2026-04-09 01:34:42.646101 | orchestrator | 2026-04-09 01:34:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-09 01:34:42.646137 | orchestrator | 2026-04-09 01:34:42 | ERROR  | Dropping encrypted entries
2026-04-09 01:34:43.731444 | orchestrator | +------+--------+----------+
2026-04-09 01:34:43.731507 | orchestrator | | ID   | Name   | Status   |
2026-04-09 01:34:43.731518 | orchestrator | |------+--------+----------|
2026-04-09 01:34:43.731524 | orchestrator | +------+--------+----------+
2026-04-09 01:34:44.025398 | orchestrator | + osism manage compute list testbed-node-5
2026-04-09 01:34:45.693691 | orchestrator | 2026-04-09 01:34:45 | ERROR  | Unable to get ansible vault password
2026-04-09 01:34:45.693762 | orchestrator | 2026-04-09 01:34:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-09 01:34:45.693770 | orchestrator | 2026-04-09 01:34:45 | ERROR  | Dropping encrypted entries
2026-04-09 01:34:47.364894 | orchestrator | +--------------------------------------+--------+----------+
2026-04-09 01:34:47.364986 | orchestrator | | ID                                   | Name   | Status   |
2026-04-09 01:34:47.364997 | orchestrator | |--------------------------------------+--------+----------|
2026-04-09 01:34:47.365003 | orchestrator | | f3c8cd64-430f-4006-9746-b1f85c50090f | test-4 | ACTIVE   |
2026-04-09 01:34:47.365009 | orchestrator | | 4b6a5de8-657d-4a77-b4a3-c344d174c4d3 | test-3 | ACTIVE   |
2026-04-09 01:34:47.365014 | orchestrator | | da955dc3-e2dc-40d0-8b0d-45d177d70f7f | test-2 | ACTIVE   |
2026-04-09 01:34:47.365021 | orchestrator | | 8be5dc37-febe-4e41-8648-e76027e991a7 | test-1 | ACTIVE   |
2026-04-09 01:34:47.365026 | orchestrator | | b8ca9ad9-9664-4929-bd79-74c25be32b8c | test   | ACTIVE   |
2026-04-09 01:34:47.365052 | orchestrator | +--------------------------------------+--------+----------+
2026-04-09 01:34:47.696664 | orchestrator | + server_ping
2026-04-09 01:34:47.698677 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-09 01:34:47.698861 | orchestrator | ++ tr -d '\r'
2026-04-09 01:34:50.756876 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 01:34:50.756975 | orchestrator | + ping -c3 192.168.112.143
2026-04-09 01:34:50.768472 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data.
2026-04-09 01:34:50.768562 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=7.83 ms
2026-04-09 01:34:51.763426 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=1.63 ms
2026-04-09 01:34:52.764937 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.74 ms
2026-04-09 01:34:52.765913 | orchestrator |
2026-04-09 01:34:52.765956 | orchestrator | --- 192.168.112.143 ping statistics ---
2026-04-09 01:34:52.765968 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-09 01:34:52.765976 | orchestrator | rtt min/avg/max/mdev = 1.633/3.734/7.832/2.897 ms
2026-04-09 01:34:52.765999 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 01:34:52.766011 | orchestrator | + ping -c3 192.168.112.109
2026-04-09 01:34:52.777533 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2026-04-09 01:34:52.777604 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=7.29 ms
2026-04-09 01:34:53.773158 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.25 ms
2026-04-09 01:34:54.772473 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.16 ms
2026-04-09 01:34:54.772540 | orchestrator |
2026-04-09 01:34:54.772553 | orchestrator | --- 192.168.112.109 ping statistics ---
2026-04-09 01:34:54.772563 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2026-04-09 01:34:54.772590 | orchestrator | rtt min/avg/max/mdev = 1.159/3.565/7.288/2.669 ms
2026-04-09 01:34:54.773148 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 01:34:54.773170 | orchestrator | + ping -c3 192.168.112.185
2026-04-09 01:34:54.780456 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-04-09 01:34:54.780503 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=4.50 ms
2026-04-09 01:34:55.779765 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.06 ms
2026-04-09 01:34:56.781773 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.35 ms
2026-04-09 01:34:56.781880 | orchestrator |
2026-04-09 01:34:56.781893 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-04-09 01:34:56.781901 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-09 01:34:56.781908 | orchestrator | rtt min/avg/max/mdev = 2.062/2.971/4.504/1.090 ms
2026-04-09 01:34:56.781993 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 01:34:56.782003 | orchestrator | + ping -c3 192.168.112.103
2026-04-09 01:34:56.794862 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
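The traced `server_ping` function loops over every active floating IP and pings each one three times, relying on `set -e` to abort the job if a ping fails. A sketch of the same loop with an explicit result check is below; `floating_ips` is a hypothetical stub standing in for the `openstack --os-cloud test floating ip list` call, so the sketch can run without a cloud.

```shell
# Stub for `openstack floating ip list ... -f value -c "Floating IP Address"`.
floating_ips() { printf '127.0.0.1\n127.0.0.1\n'; }

checked=0
failed=0
# tr -d '\r' mirrors the original loop, which strips CR from CLI output.
for address in $(floating_ips | tr -d '\r'); do
    checked=$((checked + 1))
    if ping -c3 -W2 "$address" > /dev/null 2>&1; then
        echo "$address reachable"
    else
        echo "$address unreachable"
        failed=$((failed + 1))
    fi
done
echo "checked=$checked failed=$failed"
```

Counting failures instead of aborting on the first one lets a health check report all unreachable addresses in a single run, at the cost of handling the non-zero exit yourself.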
2026-04-09 01:34:56.794933 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=8.70 ms
2026-04-09 01:34:57.790586 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.36 ms
2026-04-09 01:34:58.791600 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.70 ms
2026-04-09 01:34:58.791696 | orchestrator |
2026-04-09 01:34:58.791708 | orchestrator | --- 192.168.112.103 ping statistics ---
2026-04-09 01:34:58.791716 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-09 01:34:58.791723 | orchestrator | rtt min/avg/max/mdev = 1.703/4.254/8.699/3.154 ms
2026-04-09 01:34:58.792012 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-09 01:34:58.792029 | orchestrator | + ping -c3 192.168.112.175
2026-04-09 01:34:58.802721 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2026-04-09 01:34:58.802792 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.13 ms
2026-04-09 01:34:59.800448 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.05 ms
2026-04-09 01:35:00.801866 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.91 ms
2026-04-09 01:35:00.801958 | orchestrator |
2026-04-09 01:35:00.801970 | orchestrator | --- 192.168.112.175 ping statistics ---
2026-04-09 01:35:00.801979 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-09 01:35:00.801986 | orchestrator | rtt min/avg/max/mdev = 1.907/3.363/6.131/1.957 ms
2026-04-09 01:35:00.944309 | orchestrator | ok: Runtime: 0:18:43.741832
2026-04-09 01:35:01.004823 |
2026-04-09 01:35:01.005001 | TASK [Run tempest]
2026-04-09 01:35:01.754943 | orchestrator |
2026-04-09 01:35:01.755199 | orchestrator | # Tempest
2026-04-09 01:35:01.755220 | orchestrator |
2026-04-09 01:35:01.755229 | orchestrator | + set -e
2026-04-09 01:35:01.755240 | orchestrator | + source /opt/manager-vars.sh
2026-04-09 01:35:01.755250 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-09 01:35:01.755349 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-09 01:35:01.755383 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-09 01:35:01.755396 | orchestrator | ++ CEPH_VERSION=reef
2026-04-09 01:35:01.755404 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-09 01:35:01.755413 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-09 01:35:01.755424 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-09 01:35:01.755435 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-09 01:35:01.755442 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-04-09 01:35:01.755453 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-04-09 01:35:01.755459 | orchestrator | ++ export ARA=false
2026-04-09 01:35:01.755470 | orchestrator | ++ ARA=false
2026-04-09 01:35:01.755484 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-09 01:35:01.755491 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-09 01:35:01.755498 | orchestrator | ++ export TEMPEST=true
2026-04-09 01:35:01.755508 | orchestrator | ++ TEMPEST=true
2026-04-09 01:35:01.755514 | orchestrator | ++ export IS_ZUUL=true
2026-04-09 01:35:01.755520 | orchestrator | ++ IS_ZUUL=true
2026-04-09 01:35:01.755528 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5
2026-04-09 01:35:01.755535 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5
2026-04-09 01:35:01.755542 | orchestrator | ++ export EXTERNAL_API=false
2026-04-09 01:35:01.755547 | orchestrator | ++ EXTERNAL_API=false
2026-04-09 01:35:01.755551 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-09 01:35:01.755555 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-09 01:35:01.755558 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-09 01:35:01.755563 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-09 01:35:01.755567 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-09 01:35:01.755571 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-09 01:35:01.755575 | orchestrator | + echo
2026-04-09 01:35:01.755579 | orchestrator | + echo '# Tempest'
2026-04-09 01:35:01.755583 | orchestrator | + echo
2026-04-09 01:35:01.755587 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-04-09 01:35:01.755591 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-04-09 01:35:13.056307 | orchestrator | 2026-04-09 01:35:13 | INFO  | Prepare task for execution of tempest.
2026-04-09 01:35:13.132909 | orchestrator | 2026-04-09 01:35:13 | INFO  | Task d7a849f9-dd23-40ec-91b7-2a5a779dc9b8 (tempest) was prepared for execution.
2026-04-09 01:35:13.132985 | orchestrator | 2026-04-09 01:35:13 | INFO  | It takes a moment until task d7a849f9-dd23-40ec-91b7-2a5a779dc9b8 (tempest) has been started and output is visible here.
2026-04-09 01:36:29.966830 | orchestrator |
2026-04-09 01:36:29.966941 | orchestrator | PLAY [Run tempest] *************************************************************
2026-04-09 01:36:29.966960 | orchestrator |
2026-04-09 01:36:29.966972 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-04-09 01:36:29.966994 | orchestrator | Thursday 09 April 2026 01:35:16 +0000 (0:00:00.307) 0:00:00.307 ********
2026-04-09 01:36:29.967005 | orchestrator | changed: [testbed-manager]
2026-04-09 01:36:29.967017 | orchestrator |
2026-04-09 01:36:29.967027 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-04-09 01:36:29.967033 | orchestrator | Thursday 09 April 2026 01:35:17 +0000 (0:00:01.075) 0:00:01.382 ********
2026-04-09 01:36:29.967041 | orchestrator | changed: [testbed-manager]
2026-04-09 01:36:29.967047 | orchestrator |
2026-04-09 01:36:29.967053 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-04-09 01:36:29.967060 | orchestrator | Thursday 09 April 2026 01:35:18 +0000 (0:00:01.184) 0:00:02.567 ********
2026-04-09 01:36:29.967067 | orchestrator | ok: [testbed-manager]
2026-04-09 01:36:29.967074 | orchestrator |
2026-04-09 01:36:29.967080 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-04-09 01:36:29.967087 | orchestrator | Thursday 09 April 2026 01:35:19 +0000 (0:00:00.458) 0:00:03.026 ********
2026-04-09 01:36:29.967093 | orchestrator | changed: [testbed-manager]
2026-04-09 01:36:29.967099 | orchestrator |
2026-04-09 01:36:29.967106 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-04-09 01:36:29.967112 | orchestrator | Thursday 09 April 2026 01:35:40 +0000 (0:00:21.687) 0:00:24.714 ********
2026-04-09 01:36:29.967146 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-04-09 01:36:29.967153 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-04-09 01:36:29.967163 | orchestrator |
2026-04-09 01:36:29.967169 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-04-09 01:36:29.967175 | orchestrator | Thursday 09 April 2026 01:35:48 +0000 (0:00:08.099) 0:00:32.814 ********
2026-04-09 01:36:29.967181 | orchestrator | ok: [testbed-manager] => {
2026-04-09 01:36:29.967188 | orchestrator |     "changed": false,
2026-04-09 01:36:29.967194 | orchestrator |     "msg": "All assertions passed"
2026-04-09 01:36:29.967200 | orchestrator | }
2026-04-09 01:36:29.967207 | orchestrator |
2026-04-09 01:36:29.967213 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-04-09 01:36:29.967219 | orchestrator | Thursday 09 April 2026 01:35:49 +0000 (0:00:00.149) 0:00:32.963 ********
2026-04-09 01:36:29.967226 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967256 | orchestrator |
2026-04-09 01:36:29.967267 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-04-09 01:36:29.967275 | orchestrator | Thursday 09 April 2026 01:35:52 +0000 (0:00:03.569) 0:00:36.532 ********
2026-04-09 01:36:29.967282 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967288 | orchestrator |
2026-04-09 01:36:29.967294 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-04-09 01:36:29.967319 | orchestrator | Thursday 09 April 2026 01:35:54 +0000 (0:00:01.831) 0:00:38.364 ********
2026-04-09 01:36:29.967326 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967332 | orchestrator |
2026-04-09 01:36:29.967338 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-04-09 01:36:29.967345 | orchestrator | Thursday 09 April 2026 01:35:58 +0000 (0:00:03.662) 0:00:42.027 ********
2026-04-09 01:36:29.967351 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967357 | orchestrator |
2026-04-09 01:36:29.967363 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-04-09 01:36:29.967370 | orchestrator | Thursday 09 April 2026 01:35:58 +0000 (0:00:00.208) 0:00:42.236 ********
2026-04-09 01:36:29.967376 | orchestrator | changed: [testbed-manager]
2026-04-09 01:36:29.967382 | orchestrator |
2026-04-09 01:36:29.967389 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-04-09 01:36:29.967395 | orchestrator | Thursday 09 April 2026 01:36:00 +0000 (0:00:02.528) 0:00:44.764 ********
2026-04-09 01:36:29.967401 | orchestrator | changed: [testbed-manager]
2026-04-09 01:36:29.967407 | orchestrator |
2026-04-09 01:36:29.967414 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-04-09 01:36:29.967431 | orchestrator | Thursday 09 April 2026 01:36:09 +0000 (0:00:08.942) 0:00:53.707 ********
2026-04-09 01:36:29.967456 | orchestrator | changed: [testbed-manager]
2026-04-09 01:36:29.967469 | orchestrator |
2026-04-09 01:36:29.967479 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-04-09 01:36:29.967490 | orchestrator | Thursday 09 April 2026 01:36:10 +0000 (0:00:00.694) 0:00:54.401 ********
2026-04-09 01:36:29.967499 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967506 | orchestrator |
2026-04-09 01:36:29.967512 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-04-09 01:36:29.967519 | orchestrator | Thursday 09 April 2026 01:36:12 +0000 (0:00:01.530) 0:00:55.931 ********
2026-04-09 01:36:29.967525 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967531 | orchestrator |
2026-04-09 01:36:29.967537 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-04-09 01:36:29.967543 | orchestrator | Thursday 09 April 2026 01:36:13 +0000 (0:00:01.634) 0:00:57.566 ********
2026-04-09 01:36:29.967549 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967555 | orchestrator |
2026-04-09 01:36:29.967562 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-04-09 01:36:29.967576 | orchestrator | Thursday 09 April 2026 01:36:13 +0000 (0:00:00.184) 0:00:57.750 ********
2026-04-09 01:36:29.967582 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967588 | orchestrator |
2026-04-09 01:36:29.967601 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-04-09 01:36:29.967608 | orchestrator | Thursday 09 April 2026 01:36:14 +0000 (0:00:00.379) 0:00:58.130 ********
2026-04-09 01:36:29.967614 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 01:36:29.967620 | orchestrator |
2026-04-09 01:36:29.967626 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-04-09 01:36:29.967650 | orchestrator | Thursday 09 April 2026 01:36:18 +0000 (0:00:04.021) 0:01:02.151 ********
2026-04-09 01:36:29.967656 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-04-09 01:36:29.967663 | orchestrator |     "changed": false,
2026-04-09 01:36:29.967669 | orchestrator |     "msg": "All assertions passed"
2026-04-09 01:36:29.967676 | orchestrator | }
2026-04-09 01:36:29.967682 | orchestrator |
2026-04-09 01:36:29.967689 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-04-09 01:36:29.967695 | orchestrator | Thursday 09 April 2026 01:36:18 +0000 (0:00:00.197) 0:01:02.349 ********
2026-04-09 01:36:29.967701 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-09 01:36:29.967709 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-09 01:36:29.967715 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:36:29.967721 | orchestrator |
2026-04-09 01:36:29.967727 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-04-09 01:36:29.967734 | orchestrator | Thursday 09 April 2026 01:36:18 +0000 (0:00:00.179) 0:01:02.528 ********
2026-04-09 01:36:29.967744 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:36:29.967754 | orchestrator |
2026-04-09 01:36:29.967764 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-04-09 01:36:29.967774 | orchestrator | Thursday 09 April 2026 01:36:18 +0000 (0:00:00.159) 0:01:02.688 ********
2026-04-09 01:36:29.967785 | orchestrator | ok: [testbed-manager]
2026-04-09 01:36:29.967795 | orchestrator |
2026-04-09 01:36:29.967807 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-04-09 01:36:29.967815 | orchestrator | Thursday 09 April 2026 01:36:19 +0000 (0:00:00.470) 0:01:03.159 ********
2026-04-09 01:36:29.967821 | orchestrator | changed: [testbed-manager]
2026-04-09 01:36:29.967828 | orchestrator |
2026-04-09 01:36:29.967834 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-04-09 01:36:29.967840 | orchestrator | Thursday 09 April 2026 01:36:20 +0000 (0:00:00.880) 0:01:04.040 ********
2026-04-09 01:36:29.967846 | orchestrator | ok: [testbed-manager]
2026-04-09 01:36:29.967852 | orchestrator |
2026-04-09 01:36:29.967858 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-04-09 01:36:29.967865 | orchestrator | Thursday 09 April 2026 01:36:20 +0000 (0:00:00.466) 0:01:04.507 ********
2026-04-09 01:36:29.967871 | orchestrator | skipping: [testbed-manager]
2026-04-09 01:36:29.967877 | orchestrator |
2026-04-09 01:36:29.967883 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-04-09 01:36:29.967889 | orchestrator | Thursday 09 April 2026 01:36:20 +0000 (0:00:00.349) 0:01:04.856 ********
2026-04-09 01:36:29.967895 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-09 01:36:29.967902 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-09 01:36:29.967908 | orchestrator |
2026-04-09 01:36:29.967914 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-04-09 01:36:29.967921 | orchestrator | Thursday 09 April 2026 01:36:28 +0000 (0:00:07.969) 0:01:12.826 ********
2026-04-09 01:36:29.967927 | orchestrator | changed: [testbed-manager]
2026-04-09 01:36:29.967933 | orchestrator |
2026-04-09 01:36:29.967944 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:36:29.967952 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 01:36:29.967959 | orchestrator |
2026-04-09 01:36:29.967965 | orchestrator |
2026-04-09 01:36:29.967972 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:36:29.967978 | orchestrator | Thursday 09 April 2026 01:36:29 +0000 (0:00:01.020) 0:01:13.846 ********
2026-04-09 01:36:29.967984 | orchestrator | ===============================================================================
2026-04-09 01:36:29.967990 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 21.69s
2026-04-09 01:36:29.967996 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.94s
2026-04-09 01:36:29.968002 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.10s
2026-04-09 01:36:29.968008 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.97s
2026-04-09 01:36:29.968019 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.02s
2026-04-09 01:36:29.968026 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.66s
2026-04-09 01:36:29.968032 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.57s
2026-04-09 01:36:29.968038 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.53s
2026-04-09 01:36:29.968045 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.83s
2026-04-09 01:36:29.968051 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.63s
2026-04-09 01:36:29.968057 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.53s
2026-04-09 01:36:29.968064 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.18s
2026-04-09 01:36:29.968070 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.08s
2026-04-09 01:36:29.968076 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.02s
2026-04-09 01:36:29.968082 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.88s
2026-04-09 01:36:29.968088 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.69s
2026-04-09 01:36:29.968095 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.47s
2026-04-09 01:36:29.968105 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.47s
2026-04-09 01:36:30.243138 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.46s
2026-04-09 01:36:30.243228 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.38s
2026-04-09 01:36:30.489246 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-04-09 01:36:30.495801 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-04-09 01:36:30.501764 | orchestrator |
2026-04-09 01:36:30.501847 | orchestrator | ## IDENTITY (API)
2026-04-09 01:36:30.501857 | orchestrator |
2026-04-09 01:36:30.501864 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-09 01:36:30.501870 | orchestrator | + echo
2026-04-09 01:36:30.501876 | orchestrator | + echo '## IDENTITY (API)'
2026-04-09 01:36:30.501882 | orchestrator | + echo
2026-04-09 01:36:30.501889 | orchestrator | + _tempest tempest.api.identity.v3
2026-04-09 01:36:30.501896 | orchestrator | + local regex=tempest.api.identity.v3
2026-04-09 01:36:30.503486 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-04-09 01:36:30.504366 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:36:30.506913 | orchestrator | + tee -a /opt/tempest/20260409-0136.log
2026-04-09 01:36:32.710956 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:36:32.711059 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:36:32.711067 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:36:32.711073 | orchestrator |
2026-04-09 01:36:32.711079 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:36:32.711084 | orchestrator | framework. For more detail see
2026-04-09 01:36:32.711089 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:36:32.711093 | orchestrator |
2026-04-09 01:36:32.711098 | orchestrator | __import__(import_str)
2026-04-09 01:36:34.290316 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:36:34.290446 | orchestrator | Did you mean one of these?
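The error quotes the entire argument string, `'run --workspace-path ... --concurrency 16'`, as a single token, which is why `tempest` cannot find a matching subcommand. One plausible cause (an assumption — the wrapper's source is not shown in this log) is a multi-word argument string expanded inside double quotes, so the shell never splits it into separate words. A minimal, generic illustration with a hypothetical `count_args` standing in for any command that dispatches on its first argument:

```shell
# count_args reports how many arguments it received.
count_args() { echo "$#"; }

args='run --regex tempest.api.identity.v3 --concurrency 16'
one=$(count_args "$args")   # quoted: the whole string arrives as ONE word
many=$(count_args $args)    # unquoted: word splitting yields 5 words
echo "quoted=$one unquoted=$many"
```

Storing the arguments in an array (`args=(run --regex ...); cmd "${args[@]}"`) is the usual bash-safe way to get one word per argument without relying on unquoted expansion.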
2026-04-09 01:36:34.290469 | orchestrator | help
2026-04-09 01:36:34.290486 | orchestrator | init
2026-04-09 01:36:34.690184 | orchestrator |
2026-04-09 01:36:34.690347 | orchestrator | ## IMAGE (API)
2026-04-09 01:36:34.690366 | orchestrator |
2026-04-09 01:36:34.690379 | orchestrator | + echo
2026-04-09 01:36:34.690390 | orchestrator | + echo '## IMAGE (API)'
2026-04-09 01:36:34.690402 | orchestrator | + echo
2026-04-09 01:36:34.690414 | orchestrator | + _tempest tempest.api.image.v2
2026-04-09 01:36:34.690425 | orchestrator | + local regex=tempest.api.image.v2
2026-04-09 01:36:34.690439 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-04-09 01:36:34.690965 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:36:34.695644 | orchestrator | + tee -a /opt/tempest/20260409-0136.log
2026-04-09 01:36:36.684803 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:36:36.684905 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:36:36.684923 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:36:36.684937 | orchestrator |
2026-04-09 01:36:36.684946 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:36:36.684954 | orchestrator | framework. For more detail see
2026-04-09 01:36:36.684962 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:36:36.684970 | orchestrator |
2026-04-09 01:36:36.685000 | orchestrator | __import__(import_str)
2026-04-09 01:36:38.200304 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:36:38.200409 | orchestrator | Did you mean one of these?
2026-04-09 01:36:38.200424 | orchestrator | help
2026-04-09 01:36:38.200433 | orchestrator | init
2026-04-09 01:36:38.563214 | orchestrator |
2026-04-09 01:36:39.138410 | orchestrator | ## NETWORK (API)
2026-04-09 01:36:39.138491 | orchestrator |
2026-04-09 01:36:39.138509 | orchestrator | + echo
2026-04-09 01:36:39.138522 | orchestrator | + echo '## NETWORK (API)'
2026-04-09 01:36:39.138534 | orchestrator | + echo
2026-04-09 01:36:39.138545 | orchestrator | + _tempest tempest.api.network
2026-04-09 01:36:39.138557 | orchestrator | + local regex=tempest.api.network
2026-04-09 01:36:39.138616 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-04-09 01:36:39.138633 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:36:39.138644 | orchestrator | + tee -a /opt/tempest/20260409-0136.log
2026-04-09 01:36:40.590976 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:36:40.591053 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:36:40.591066 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:36:40.591108 | orchestrator |
2026-04-09 01:36:40.591114 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:36:40.591136 | orchestrator | framework. For more detail see
2026-04-09 01:36:40.591141 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:36:40.591145 | orchestrator |
2026-04-09 01:36:40.591150 | orchestrator | __import__(import_str)
2026-04-09 01:36:42.098103 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:36:42.098185 | orchestrator | Did you mean one of these?
2026-04-09 01:36:42.098196 | orchestrator | help
2026-04-09 01:36:42.098202 | orchestrator | init
2026-04-09 01:36:42.467403 | orchestrator |
2026-04-09 01:36:42.467497 | orchestrator | ## VOLUME (API)
2026-04-09 01:36:42.467514 | orchestrator |
2026-04-09 01:36:42.467526 | orchestrator | + echo
2026-04-09 01:36:42.467537 | orchestrator | + echo '## VOLUME (API)'
2026-04-09 01:36:42.467550 | orchestrator | + echo
2026-04-09 01:36:42.467561 | orchestrator | + _tempest tempest.api.volume
2026-04-09 01:36:42.467573 | orchestrator | + local regex=tempest.api.volume
2026-04-09 01:36:42.467854 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-04-09 01:36:42.468744 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:36:42.471173 | orchestrator | + tee -a /opt/tempest/20260409-0136.log
2026-04-09 01:36:44.465993 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:36:44.466101 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:36:44.466108 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:36:44.466113 | orchestrator |
2026-04-09 01:36:44.466118 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:36:44.466122 | orchestrator | framework. For more detail see
2026-04-09 01:36:44.466128 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:36:44.466132 | orchestrator |
2026-04-09 01:36:44.466136 | orchestrator | __import__(import_str)
2026-04-09 01:36:45.946749 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:36:45.946842 | orchestrator | Did you mean one of these?
2026-04-09 01:36:45.946855 | orchestrator | help
2026-04-09 01:36:45.946864 | orchestrator | init
2026-04-09 01:36:46.302132 | orchestrator |
2026-04-09 01:36:46.302203 | orchestrator | ## COMPUTE (API)
2026-04-09 01:36:46.302214 | orchestrator |
2026-04-09 01:36:46.302221 | orchestrator | + echo
2026-04-09 01:36:46.302265 | orchestrator | + echo '## COMPUTE (API)'
2026-04-09 01:36:46.302273 | orchestrator | + echo
2026-04-09 01:36:46.302280 | orchestrator | + _tempest tempest.api.compute
2026-04-09 01:36:46.302287 | orchestrator | + local regex=tempest.api.compute
2026-04-09 01:36:46.303431 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-04-09 01:36:46.304025 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:36:46.308617 | orchestrator | + tee -a /opt/tempest/20260409-0136.log
2026-04-09 01:36:48.406902 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:36:48.406969 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:36:48.406976 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:36:48.406981 | orchestrator |
2026-04-09 01:36:48.406988 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:36:48.406994 | orchestrator | framework. For more detail see
2026-04-09 01:36:48.407001 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:36:48.407007 | orchestrator |
2026-04-09 01:36:48.407018 | orchestrator | __import__(import_str)
2026-04-09 01:36:49.919155 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:36:49.919280 | orchestrator | Did you mean one of these?
2026-04-09 01:36:49.919295 | orchestrator | help
2026-04-09 01:36:49.919302 | orchestrator | init
2026-04-09 01:36:50.281849 | orchestrator |
2026-04-09 01:36:50.281901 | orchestrator | ## DNS (API)
2026-04-09 01:36:50.281906 | orchestrator |
2026-04-09 01:36:50.281910 | orchestrator | + echo
2026-04-09 01:36:50.281915 | orchestrator | + echo '## DNS (API)'
2026-04-09 01:36:50.281920 | orchestrator | + echo
2026-04-09 01:36:50.281924 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-04-09 01:36:50.281929 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-04-09 01:36:50.282118 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-04-09 01:36:50.283685 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:36:50.293312 | orchestrator | + tee -a /opt/tempest/20260409-0136.log
2026-04-09 01:36:52.261688 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:36:52.480831 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:36:52.480902 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:36:52.480911 | orchestrator |
2026-04-09 01:36:52.480919 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:36:52.480926 | orchestrator | framework. For more detail see
2026-04-09 01:36:52.480933 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:36:52.480938 | orchestrator |
2026-04-09 01:36:52.480945 | orchestrator | __import__(import_str)
2026-04-09 01:36:53.896616 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:36:53.896704 | orchestrator | Did you mean one of these?
2026-04-09 01:36:53.896716 | orchestrator | help
2026-04-09 01:36:53.896725 | orchestrator | init
2026-04-09 01:36:54.302301 | orchestrator |
2026-04-09 01:36:54.302396 | orchestrator | ## OBJECT-STORE (API)
2026-04-09 01:36:54.302412 | orchestrator |
2026-04-09 01:36:54.302424 | orchestrator | + echo
2026-04-09 01:36:54.302435 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-04-09 01:36:54.302446 | orchestrator | + echo
2026-04-09 01:36:54.302457 | orchestrator | + _tempest tempest.api.object_storage
2026-04-09 01:36:54.302470 | orchestrator | + local regex=tempest.api.object_storage
2026-04-09 01:36:54.303656 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-04-09 01:36:54.305304 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:36:54.308622 | orchestrator | + tee -a /opt/tempest/20260409-0136.log
2026-04-09 01:36:56.369726 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:36:56.369805 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:36:56.369817 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:36:56.369827 | orchestrator |
2026-04-09 01:36:56.369836 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:36:56.369845 | orchestrator | framework. For more detail see
2026-04-09 01:36:56.369854 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:36:56.369861 | orchestrator |
2026-04-09 01:36:56.369870 | orchestrator | __import__(import_str)
2026-04-09 01:36:57.896738 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:36:57.896831 | orchestrator | Did you mean one of these?
2026-04-09 01:36:57.896872 | orchestrator | help
2026-04-09 01:36:57.896881 | orchestrator | init
2026-04-09 01:36:58.659321 | orchestrator | ok: Runtime: 0:01:56.907754
2026-04-09 01:36:58.679011 |
2026-04-09 01:36:58.679160 | TASK [Check prometheus alert status]
2026-04-09 01:36:59.215713 | orchestrator | skipping: Conditional result was False
2026-04-09 01:36:59.219200 |
2026-04-09 01:36:59.219418 | PLAY RECAP
2026-04-09 01:36:59.219584 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-04-09 01:36:59.219656 |
2026-04-09 01:36:59.445169 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-09 01:36:59.450904 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-09 01:37:00.191621 |
2026-04-09 01:37:00.191779 | PLAY [Post output play]
2026-04-09 01:37:00.207857 |
2026-04-09 01:37:00.208008 | LOOP [stage-output : Register sources]
2026-04-09 01:37:00.273250 |
2026-04-09 01:37:00.273530 | TASK [stage-output : Check sudo]
2026-04-09 01:37:01.137343 | orchestrator | sudo: a password is required
2026-04-09 01:37:01.328965 | orchestrator | ok: Runtime: 0:00:00.019919
2026-04-09 01:37:01.344897 |
2026-04-09 01:37:01.345096 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-09 01:37:01.381600 |
2026-04-09 01:37:01.381979 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-09 01:37:01.452516 | orchestrator | ok
2026-04-09 01:37:01.461910 |
2026-04-09 01:37:01.462053 | LOOP [stage-output : Ensure target folders exist]
2026-04-09 01:37:01.937424 | orchestrator | ok: "docs"
2026-04-09 01:37:01.937677 |
2026-04-09 01:37:02.226384 | orchestrator | ok: "artifacts"
2026-04-09 01:37:02.533364 | orchestrator | ok: "logs"
2026-04-09 01:37:02.547766 |
2026-04-09 01:37:02.547946 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-09 01:37:02.583221 |
2026-04-09 01:37:02.583500 | TASK [stage-output : Make all log files readable]
2026-04-09 01:37:02.926342 | orchestrator | ok
2026-04-09 01:37:02.935752 |
2026-04-09 01:37:02.935951 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-09 01:37:02.970348 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:02.980317 |
2026-04-09 01:37:02.980442 | TASK [stage-output : Discover log files for compression]
2026-04-09 01:37:03.004026 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:03.013562 |
2026-04-09 01:37:03.013688 | LOOP [stage-output : Archive everything from logs]
2026-04-09 01:37:03.054890 |
2026-04-09 01:37:03.055092 | PLAY [Post cleanup play]
2026-04-09 01:37:03.064071 |
2026-04-09 01:37:03.064188 | TASK [Set cloud fact (Zuul deployment)]
2026-04-09 01:37:03.132682 | orchestrator | ok
2026-04-09 01:37:03.144279 |
2026-04-09 01:37:03.144409 | TASK [Set cloud fact (local deployment)]
2026-04-09 01:37:03.178507 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:03.193143 |
2026-04-09 01:37:03.193296 | TASK [Clean the cloud environment]
2026-04-09 01:37:04.597035 | orchestrator | 2026-04-09 01:37:04 - clean up servers
2026-04-09 01:37:05.364082 | orchestrator | 2026-04-09 01:37:05 - testbed-manager
2026-04-09 01:37:05.469600 | orchestrator | 2026-04-09 01:37:05 - testbed-node-0
2026-04-09 01:37:05.568901 | orchestrator | 2026-04-09 01:37:05 - testbed-node-1
2026-04-09 01:37:05.662551 | orchestrator | 2026-04-09 01:37:05 - testbed-node-2
2026-04-09 01:37:05.763668 | orchestrator | 2026-04-09 01:37:05 - testbed-node-5
2026-04-09 01:37:05.864372 | orchestrator | 2026-04-09 01:37:05 - testbed-node-4
2026-04-09 01:37:05.965059 | orchestrator | 2026-04-09 01:37:05 - testbed-node-3
2026-04-09 01:37:06.058305 | orchestrator | 2026-04-09 01:37:06 - clean up keypairs
2026-04-09 01:37:06.077606 | orchestrator | 2026-04-09 01:37:06 - testbed
2026-04-09 01:37:06.114057 | orchestrator | 2026-04-09 01:37:06 - wait for servers to be gone
2026-04-09 01:37:19.224849 | orchestrator | 2026-04-09 01:37:19 - clean up ports
2026-04-09 01:37:19.438783 | orchestrator | 2026-04-09 01:37:19 - 5d04def5-83ec-4534-8bf2-536fde58e93c
2026-04-09 01:37:19.693797 | orchestrator | 2026-04-09 01:37:19 - a5d66f83-4790-4545-9f34-d68352d06404
2026-04-09 01:37:20.017111 | orchestrator | 2026-04-09 01:37:20 - dcfea8b6-40b3-4b4d-8637-e26cc6dceb82
2026-04-09 01:37:20.250743 | orchestrator | 2026-04-09 01:37:20 - e930c737-c01c-4952-b766-31e3c71b8261
2026-04-09 01:37:21.171940 | orchestrator | 2026-04-09 01:37:21 - f22127ec-dc47-479a-b1df-94e24f8933f6
2026-04-09 01:37:21.404528 | orchestrator | 2026-04-09 01:37:21 - f314979e-5950-4945-ac2e-15bfe53a052d
2026-04-09 01:37:21.686163 | orchestrator | 2026-04-09 01:37:21 - f660fee7-1790-4064-b35e-2ce263c95560
2026-04-09 01:37:21.931624 | orchestrator | 2026-04-09 01:37:21 - clean up volumes
2026-04-09 01:37:22.056290 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-0-node-base
2026-04-09 01:37:22.102793 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-4-node-base
2026-04-09 01:37:22.149152 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-2-node-base
2026-04-09 01:37:22.196133 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-manager-base
2026-04-09 01:37:22.236950 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-5-node-base
2026-04-09 01:37:22.283852 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-3-node-base
2026-04-09 01:37:22.329706 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-1-node-base
2026-04-09 01:37:22.376326 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-3-node-3
2026-04-09 01:37:22.421069 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-4-node-4
2026-04-09 01:37:22.464482 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-7-node-4
2026-04-09 01:37:22.510341 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-5-node-5
2026-04-09 01:37:22.556324 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-2-node-5
2026-04-09 01:37:22.600816 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-6-node-3
2026-04-09 01:37:22.643636 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-1-node-4
2026-04-09 01:37:22.689468 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-8-node-5
2026-04-09 01:37:22.740930 | orchestrator | 2026-04-09 01:37:22 - testbed-volume-0-node-3
2026-04-09 01:37:22.787863 | orchestrator | 2026-04-09 01:37:22 - disconnect routers
2026-04-09 01:37:22.932681 | orchestrator | 2026-04-09 01:37:22 - testbed
2026-04-09 01:37:24.016136 | orchestrator | 2026-04-09 01:37:24 - clean up subnets
2026-04-09 01:37:24.073823 | orchestrator | 2026-04-09 01:37:24 - subnet-testbed-management
2026-04-09 01:37:24.261522 | orchestrator | 2026-04-09 01:37:24 - clean up networks
2026-04-09 01:37:24.493262 | orchestrator | 2026-04-09 01:37:24 - net-testbed-management
2026-04-09 01:37:24.890469 | orchestrator | 2026-04-09 01:37:24 - clean up security groups
2026-04-09 01:37:24.936463 | orchestrator | 2026-04-09 01:37:24 - testbed-management
2026-04-09 01:37:25.209511 | orchestrator | 2026-04-09 01:37:25 - testbed-node
2026-04-09 01:37:25.333000 | orchestrator | 2026-04-09 01:37:25 - clean up floating ips
2026-04-09 01:37:25.385161 | orchestrator | 2026-04-09 01:37:25 - 81.163.193.5
2026-04-09 01:37:25.799605 | orchestrator | 2026-04-09 01:37:25 - clean up routers
2026-04-09 01:37:25.946014 | orchestrator | 2026-04-09 01:37:25 - testbed
2026-04-09 01:37:27.754707 | orchestrator | ok: Runtime: 0:00:24.169935
2026-04-09 01:37:27.759510 |
2026-04-09 01:37:27.759685 | PLAY RECAP
2026-04-09 01:37:27.759814 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-09 01:37:27.759877 |
2026-04-09 01:37:27.890329 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-09 01:37:27.891447 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-09 01:37:28.667917 |
2026-04-09 01:37:28.668099 | PLAY [Cleanup play]
2026-04-09 01:37:28.685102 |
2026-04-09 01:37:28.685257 | TASK [Set cloud fact (Zuul deployment)]
2026-04-09 01:37:28.742634 | orchestrator | ok
2026-04-09 01:37:28.752495 |
2026-04-09 01:37:28.752658 | TASK [Set cloud fact (local deployment)]
2026-04-09 01:37:28.777961 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:28.794596 |
2026-04-09 01:37:28.794807 | TASK [Clean the cloud environment]
2026-04-09 01:37:29.994332 | orchestrator | 2026-04-09 01:37:29 - clean up servers
2026-04-09 01:37:30.467063 | orchestrator | 2026-04-09 01:37:30 - clean up keypairs
2026-04-09 01:37:30.480756 | orchestrator | 2026-04-09 01:37:30 - wait for servers to be gone
2026-04-09 01:37:30.522422 | orchestrator | 2026-04-09 01:37:30 - clean up ports
2026-04-09 01:37:30.625957 | orchestrator | 2026-04-09 01:37:30 - clean up volumes
2026-04-09 01:37:30.709918 | orchestrator | 2026-04-09 01:37:30 - disconnect routers
2026-04-09 01:37:30.741459 | orchestrator | 2026-04-09 01:37:30 - clean up subnets
2026-04-09 01:37:30.777710 | orchestrator | 2026-04-09 01:37:30 - clean up networks
2026-04-09 01:37:30.938431 | orchestrator | 2026-04-09 01:37:30 - clean up security groups
2026-04-09 01:37:30.983074 | orchestrator | 2026-04-09 01:37:30 - clean up floating ips
2026-04-09 01:37:31.012184 | orchestrator | 2026-04-09 01:37:31 - clean up routers
2026-04-09 01:37:31.341264 | orchestrator | ok: Runtime: 0:00:01.434478
2026-04-09 01:37:31.344045 |
2026-04-09 01:37:31.344172 | PLAY RECAP
2026-04-09 01:37:31.344255 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-09 01:37:31.344294 |
2026-04-09 01:37:31.480555 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-09 01:37:31.483325 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-09 01:37:32.221162 |
2026-04-09 01:37:32.221330 | PLAY [Base post-fetch]
2026-04-09 01:37:32.238079 |
2026-04-09 01:37:32.238228 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-09 01:37:32.294108 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:32.309678 |
2026-04-09 01:37:32.309890 | TASK [fetch-output : Set log path for single node]
2026-04-09 01:37:32.368818 | orchestrator | ok
2026-04-09 01:37:32.379430 |
2026-04-09 01:37:32.379585 | LOOP [fetch-output : Ensure local output dirs]
2026-04-09 01:37:32.887182 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/534efbc076ff4f1292525425ed63042a/work/logs"
2026-04-09 01:37:33.169692 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/534efbc076ff4f1292525425ed63042a/work/artifacts"
2026-04-09 01:37:33.433244 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/534efbc076ff4f1292525425ed63042a/work/docs"
2026-04-09 01:37:33.460386 |
2026-04-09 01:37:33.460566 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-09 01:37:34.409005 | orchestrator | changed: .d..t...... ./
2026-04-09 01:37:34.409557 | orchestrator | changed: All items complete
2026-04-09 01:37:34.410488 |
2026-04-09 01:37:35.157152 | orchestrator | changed: .d..t...... ./
2026-04-09 01:37:35.894065 | orchestrator | changed: .d..t...... ./
2026-04-09 01:37:35.919687 |
2026-04-09 01:37:35.919827 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-09 01:37:35.960155 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:35.962464 | orchestrator | skipping: Conditional result was False
2026-04-09 01:37:35.976118 |
2026-04-09 01:37:35.976228 | PLAY RECAP
2026-04-09 01:37:35.976297 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-09 01:37:35.976335 |
2026-04-09 01:37:36.106162 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-09 01:37:36.107283 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-09 01:37:36.856594 |
2026-04-09 01:37:36.856759 | PLAY [Base post]
2026-04-09 01:37:36.871485 |
2026-04-09 01:37:36.871621 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-09 01:37:37.881367 | orchestrator | changed
2026-04-09 01:37:37.891530 |
2026-04-09 01:37:37.891664 | PLAY RECAP
2026-04-09 01:37:37.891740 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-09 01:37:37.891815 |
2026-04-09 01:37:38.006300 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-09 01:37:38.007429 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-09 01:37:38.778654 |
2026-04-09 01:37:38.778826 | PLAY [Base post-logs]
2026-04-09 01:37:38.789534 |
2026-04-09 01:37:38.789662 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-09 01:37:39.233379 | localhost | changed
2026-04-09 01:37:39.251663 |
2026-04-09 01:37:39.251866 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-09 01:37:39.289137 | localhost | ok
2026-04-09 01:37:39.293378 |
2026-04-09 01:37:39.293507 | TASK [Set zuul-log-path fact]
2026-04-09 01:37:39.319599 | localhost | ok
2026-04-09 01:37:39.330354 |
2026-04-09 01:37:39.330503 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-09 01:37:39.366770 | localhost | ok
2026-04-09 01:37:39.371299 |
2026-04-09 01:37:39.371447 | TASK [upload-logs : Create log directories]
2026-04-09 01:37:39.880572 | localhost | changed
2026-04-09 01:37:39.885618 |
2026-04-09 01:37:39.885783 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-09 01:37:40.399362 | localhost -> localhost | ok: Runtime: 0:00:00.006719
2026-04-09 01:37:40.409588 |
2026-04-09 01:37:40.409772 | TASK [upload-logs : Upload logs to log server]
2026-04-09 01:37:40.983950 | localhost | Output suppressed because no_log was given
2026-04-09 01:37:40.986128 |
2026-04-09 01:37:40.986251 | LOOP [upload-logs : Compress console log and json output]
2026-04-09 01:37:41.043571 | localhost | skipping: Conditional result was False
2026-04-09 01:37:41.049147 | localhost | skipping: Conditional result was False
2026-04-09 01:37:41.057497 |
2026-04-09 01:37:41.057728 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-09 01:37:41.105355 | localhost | skipping: Conditional result was False
2026-04-09 01:37:41.106092 |
2026-04-09 01:37:41.109173 | localhost | skipping: Conditional result was False
2026-04-09 01:37:41.122502 |
2026-04-09 01:37:41.122720 | LOOP [upload-logs : Upload console log and json output]